Test Report: Docker_Windows 14269

ab7bb61b313d0ba57acd833ecb833795c1bc5389:2022-06-02:24239

Test fail (11/257)

TestFunctional/parallel/ServiceCmd (2073.94s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220602172845-12108 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1438: (dbg) Run:  kubectl --context functional-20220602172845-12108 expose deployment hello-node --type=NodePort --port=8080

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1438: (dbg) Done: kubectl --context functional-20220602172845-12108 expose deployment hello-node --type=NodePort --port=8080: (1.6921578s)
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54fbb85-l5tpx" [9b6b8a79-b2d8-4a4f-b2d8-cad582357bb9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

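The wait logged at functional_test.go:1443 polls until pods matching app=hello-node report Running (the pod sits in Pending while the echoserver image pulls). A minimal sketch of that kind of label-selector poll, assuming k8s.io/client-go; the helper below is illustrative, not minikube's actual helpers_test.go code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning blocks until every pod matching selector in ns is
// Running, polling every two seconds up to timeout.
func waitForPodsRunning(client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pods, err := client.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		if len(pods.Items) == 0 {
			return false, nil // the deployment has not created a pod yet
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil // e.g. still Pending during the image pull
			}
		}
		return true, nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	if err := waitForPodsRunning(client, "default", "app=hello-node", 10*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pods are Running")
}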
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54fbb85-l5tpx" [9b6b8a79-b2d8-4a4f-b2d8-cad582357bb9] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 30.1971078s
functional_test.go:1448: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1448: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 service list: (7.0574951s)
functional_test.go:1462: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 service --namespace=default --https --url hello-node
functional_test.go:1391: Failed to send interrupt to proc not supported by windows

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1462: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 service --namespace=default --https --url hello-node: exit status 1 (33m30.8493724s)

-- stdout --
	https://127.0.0.1:51437

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1464: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-20220602172845-12108 service --namespace=default --https --url hello-node" : exit status 1
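The 33m30.8s hang above follows from functional_test.go:1391: `minikube service --url` keeps running while its tunnel is open, and the harness cannot stop it because Go cannot deliver os.Interrupt to a child process on Windows, so the command only exits when the test itself gives up. A minimal standalone sketch of that platform limitation (the long-running child command is purely illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Any long-running child works; "ping -t" loops forever on Windows.
	cmd := exec.Command("ping", "-t", "127.0.0.1")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// On Windows this returns an error ("not supported by windows");
	// on Unix it would deliver SIGINT.
	if err := cmd.Process.Signal(os.Interrupt); err != nil {
		fmt.Println("failed to send interrupt:", err)
		_ = cmd.Process.Kill() // the only portable way to stop the child
	}
}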
functional_test.go:1401: service test failed - dumping debug information
functional_test.go:1402: -----------------------service failure post-mortem--------------------------------
functional_test.go:1405: (dbg) Run:  kubectl --context functional-20220602172845-12108 describe po hello-node
functional_test.go:1409: hello-node pod describe:
Name:         hello-node-54fbb85-l5tpx
Namespace:    default
Priority:     0
Node:         functional-20220602172845-12108/192.168.49.2
Start Time:   Thu, 02 Jun 2022 17:34:35 +0000
Labels:       app=hello-node
pod-template-hash=54fbb85
Annotations:  <none>
Status:       Running
IP:           172.17.0.6
IPs:
IP:           172.17.0.6
Controlled By:  ReplicaSet/hello-node-54fbb85
Containers:
echoserver:
Container ID:   docker://149fe597618122dc6fb4eb7fb2f007100a5f6db1bb8b5ca9b5a2e43bb9452bfb
Image:          k8s.gcr.io/echoserver:1.8
Image ID:       docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port:           <none>
Host Port:      <none>
State:          Running
Started:      Thu, 02 Jun 2022 17:35:00 +0000
Ready:          True
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g8447 (ro)
Conditions:
Type              Status
Initialized       True 
Ready             True 
ContainersReady   True 
PodScheduled      True 
Volumes:
kube-api-access-g8447:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type    Reason     Age        From                                      Message
----    ------     ----       ----                                      -------
Normal  Scheduled  <unknown>                                            Successfully assigned default/hello-node-54fbb85-l5tpx to functional-20220602172845-12108
Normal  Pulling    34m        kubelet, functional-20220602172845-12108  Pulling image "k8s.gcr.io/echoserver:1.8"
Normal  Pulled     33m        kubelet, functional-20220602172845-12108  Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 18.4299733s
Normal  Created    33m        kubelet, functional-20220602172845-12108  Created container echoserver
Normal  Started    33m        kubelet, functional-20220602172845-12108  Started container echoserver

Name:         hello-node-connect-74cf8bc446-qjhfg
Namespace:    default
Priority:     0
Node:         functional-20220602172845-12108/192.168.49.2
Start Time:   Thu, 02 Jun 2022 17:34:35 +0000
Labels:       app=hello-node-connect
pod-template-hash=74cf8bc446
Annotations:  <none>
Status:       Running
IP:           172.17.0.5
IPs:
IP:           172.17.0.5
Controlled By:  ReplicaSet/hello-node-connect-74cf8bc446
Containers:
echoserver:
Container ID:   docker://2c4191d862c838b4c8915a49753b8eec5c08916808271c4bc7711a2bc88598f9
Image:          k8s.gcr.io/echoserver:1.8
Image ID:       docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port:           <none>
Host Port:      <none>
State:          Running
Started:      Thu, 02 Jun 2022 17:35:00 +0000
Ready:          True
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kn7cx (ro)
Conditions:
Type              Status
Initialized       True 
Ready             True 
ContainersReady   True 
PodScheduled      True 
Volumes:
kube-api-access-kn7cx:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type    Reason     Age        From                                      Message
----    ------     ----       ----                                      -------
Normal  Scheduled  <unknown>                                            Successfully assigned default/hello-node-connect-74cf8bc446-qjhfg to functional-20220602172845-12108
Normal  Pulling    34m        kubelet, functional-20220602172845-12108  Pulling image "k8s.gcr.io/echoserver:1.8"
Normal  Pulled     33m        kubelet, functional-20220602172845-12108  Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 18.8132659s
Normal  Created    33m        kubelet, functional-20220602172845-12108  Created container echoserver
Normal  Started    33m        kubelet, functional-20220602172845-12108  Started container echoserver

functional_test.go:1411: (dbg) Run:  kubectl --context functional-20220602172845-12108 logs -l app=hello-node
functional_test.go:1415: hello-node logs:
functional_test.go:1417: (dbg) Run:  kubectl --context functional-20220602172845-12108 describe svc hello-node
functional_test.go:1421: hello-node svc describe:
Name:                     hello-node
Namespace:                default
Labels:                   app=hello-node
Annotations:              <none>
Selector:                 app=hello-node
Type:                     NodePort
IP:                       10.99.217.46
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31650/TCP
Endpoints:                172.17.0.6:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
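The service chain in the describe output above is: container port 8080 -> NodePort 31650 on node 192.168.49.2 -> the Docker-driver tunnel at 127.0.0.1:51437 printed in stdout earlier, which is reachable only while the tunnel process stays open. A hypothetical smoke test of that printed URL; it assumes echoserver answers plain HTTP and that the --https flag only changes the printed scheme:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://127.0.0.1:51437")
	if err != nil {
		fmt.Println("service unreachable (tunnel closed?):", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %s, %d bytes\n", resp.Status, len(body))
}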
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220602172845-12108
helpers_test.go:231: (dbg) Done: docker inspect functional-20220602172845-12108: (1.0647881s)
helpers_test.go:235: (dbg) docker inspect functional-20220602172845-12108:

-- stdout --
	[
	    {
	        "Id": "297d7bd3193981c56923ea9a67f965b5b93d32c530a88cae0db3b424773c6ad5",
	        "Created": "2022-06-02T17:29:37.4256166Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 21052,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T17:29:38.4958531Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/297d7bd3193981c56923ea9a67f965b5b93d32c530a88cae0db3b424773c6ad5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/297d7bd3193981c56923ea9a67f965b5b93d32c530a88cae0db3b424773c6ad5/hostname",
	        "HostsPath": "/var/lib/docker/containers/297d7bd3193981c56923ea9a67f965b5b93d32c530a88cae0db3b424773c6ad5/hosts",
	        "LogPath": "/var/lib/docker/containers/297d7bd3193981c56923ea9a67f965b5b93d32c530a88cae0db3b424773c6ad5/297d7bd3193981c56923ea9a67f965b5b93d32c530a88cae0db3b424773c6ad5-json.log",
	        "Name": "/functional-20220602172845-12108",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20220602172845-12108:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20220602172845-12108",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7b1566f809bb5cea6f1e60492a928281ca264fcb5dd82ca9a266c98413582f4e-init/diff:/var/lib/docker/overlay2/dfce970b43800856c522d9750e5e1364e8adf4be4cf71ca7c53d79b33355f5a7/diff:/var/lib/docker/overlay2/4fd23a1b84854239f1bb855d05e42ecd6acbd1b0944b347813a56f5f45356a42/diff:/var/lib/docker/overlay2/864c5b1fbc297750771bb843fdeb4bafa10868a71716f4a01f1119609fb34667/diff:/var/lib/docker/overlay2/0f11f6855118857c743b90ca120ff7aa550f8157d475abf59df950433a5bc6e8/diff:/var/lib/docker/overlay2/2ae7f559725a060dc3b3a9c2fbd554b98114ae47dbf8db75f13bd8a95cbae19a/diff:/var/lib/docker/overlay2/48f41ac288d1037223ac101e6bc07f05729cdcecd98cc85971db99e90765c437/diff:/var/lib/docker/overlay2/8d4eaae639ade3ad3459b4fb67dbcac83774b72a2550b0a4bca1f21d122b20e6/diff:/var/lib/docker/overlay2/e06515bb91756221300de52336376d32ef9bd8685a92352e522936c4947b88ee/diff:/var/lib/docker/overlay2/a2f615fb794b704dc3823080c47e2c357cf4826ec91f6ae190c7497bb18a80cd/diff:/var/lib/docker/overlay2/22f99f
8a3da21c6e2be4c5c5e9d969af73e7695aaf9b0c7d0d09b5795ba76416/diff:/var/lib/docker/overlay2/9c0266785c64b9f6c471863067ca9db045a5aa61167a7817217cf01825a7d868/diff:/var/lib/docker/overlay2/b8a0250c9ae7d899ee3e46414c2db7f7ba363793900f8fcbf1b470586ebe7bd9/diff:/var/lib/docker/overlay2/00afbeac619cb9c06d4da311f5fc5aa3f5147b88b291acf06d4c4b36984ad5a2/diff:/var/lib/docker/overlay2/da51241ed08bd861b9d27902198eae13c3e4aac5c79f522e9f3fa209ea35e8d3/diff:/var/lib/docker/overlay2/b01176f7dbe98e3004db7c0fe45d94616a803dd8ae9cbdf3a1f2a188604178af/diff:/var/lib/docker/overlay2/0ebb0ff0177c8116e72a14ac704b161f75922cea05fe804ad1f7b83f4cd3dd70/diff:/var/lib/docker/overlay2/bae8d175bc3e334a70aaa239643efa0e8b453ab163f077d9cef60e3840c717ba/diff:/var/lib/docker/overlay2/e72a79f763a44dc32f9a2e84dc5e28a060e7fbb9f4624cb8aaa084dd356522ec/diff:/var/lib/docker/overlay2/2e1bc304b205033ad7f49fb8db243b0991596e0eec913fd13e8382aa25767e21/diff:/var/lib/docker/overlay2/ebb9b39dedfc09f9f34ea879f56a8ffd24ab9f9bf8acc93aa9df5eb93dba58e8/diff:/var/lib/d
ocker/overlay2/bffdca36eba4bce9086f2c269bcfe5b915d807483717f0e27acbd51b5bbfc11b/diff:/var/lib/docker/overlay2/96c321cbf06c0050c8a0a7897e9533db1ee5788eb09b1e1d605bdd1134af8eca/diff:/var/lib/docker/overlay2/735422b44af98e330209fe1c4273bf57aa33fcfd770f3e9d6f1a6e59f7545920/diff:/var/lib/docker/overlay2/8dc177c0589f67ded7d9c229d3c587fe77b3d1c68cf0a5af871bc23768d67d84/diff:/var/lib/docker/overlay2/9a29541ccfee3849e0691950c599bb7e4e51d9026724b1ad13abc8d8e9c140e0/diff:/var/lib/docker/overlay2/50fe1dc8f357b5d624681e6f14d98e6d33a8b6b53d70293ba90ac4435a1e18d8/diff:/var/lib/docker/overlay2/86f301a296dbb7422a3d55a008a9f38278a7a19d68a0f735d298c0c2a431ee30/diff:/var/lib/docker/overlay2/dc8087ea592587f8cb5392cc0ee739c33f2724c47b83767d593b3065914820b0/diff:/var/lib/docker/overlay2/15163601889f0d414f35ccd64ae33a52958605b5b7e50618ed5d4f4bd06ec65b/diff:/var/lib/docker/overlay2/a50cf19d9d69b9c68c6c66a918cbde678b49e8d566d06772af22bf99191b08f3/diff:/var/lib/docker/overlay2/621f3b0fc578721c5d0465771ad007f022ed238fa5a2076f807c077680c
26d27/diff:/var/lib/docker/overlay2/2652f9ffde92786a77e3bb35fe07c03a623aaad541f0ca9710839800c4b470e4/diff:/var/lib/docker/overlay2/c853755ee76ea55ad6c00f5eaff82196f4953ee6fb2d27e27ba35f86d56bfc32/diff:/var/lib/docker/overlay2/a0f70e6416a8e618ea7475b5e7f4cdc9a66ac39f0a6c1969c569d8e4f0b5e9eb/diff:/var/lib/docker/overlay2/275d2c643ecb011298df16e0794bebb9a7ec82e190aea53a90369288c521f75e/diff:/var/lib/docker/overlay2/a7e78f238badc23c2c38b7e9b9c4428c0614e825744076161295740d46a20957/diff:/var/lib/docker/overlay2/39fcd4c392271449973511a31d445289c1f8d378d01759fef12c430c9f44f2b8/diff:/var/lib/docker/overlay2/e1c51360d327e86575fe8248415fae12e9dbdde580db0e6f4f4e485ac9f92e3b/diff:/var/lib/docker/overlay2/fecd88783858177cbe3b751f0717b370c5556d7cf0ef163e2710f16fce09d53c/diff:/var/lib/docker/overlay2/3b4c7afaac6f5818bc33bec8c0ec442eb5a1010d0de6fe488460ee83a3901b21/diff:/var/lib/docker/overlay2/47d0047bc42c34ea02c33c1500f96c5109f27f84f973a5636832bbc855761e3f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7b1566f809bb5cea6f1e60492a928281ca264fcb5dd82ca9a266c98413582f4e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7b1566f809bb5cea6f1e60492a928281ca264fcb5dd82ca9a266c98413582f4e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7b1566f809bb5cea6f1e60492a928281ca264fcb5dd82ca9a266c98413582f4e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-20220602172845-12108",
	                "Source": "/var/lib/docker/volumes/functional-20220602172845-12108/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20220602172845-12108",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20220602172845-12108",
	                "name.minikube.sigs.k8s.io": "functional-20220602172845-12108",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "efa325e477cb33d07c8eb59e8986c67cdb7a0c9d9485f8e2e3620d01ceacb8a6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51168"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51169"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51170"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51171"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51172"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/efa325e477cb",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20220602172845-12108": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "297d7bd31939",
	                        "functional-20220602172845-12108"
	                    ],
	                    "NetworkID": "34bd2bef96a2e24112d476abd5ee49cf8b66ed7bdd21d8e661c89d34d79ecd9a",
	                    "EndpointID": "9d193aac4f9f962d9bbecca538ea972a8c4eb5d12eb262da1c86516b2d609ae3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
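The State and port-binding details buried in the inspect dump above can also be fetched field-by-field instead of as one JSON blob. A minimal sketch, assuming the Docker Engine Go SDK (github.com/docker/docker/client):

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	info, err := cli.ContainerInspect(context.Background(), "functional-20220602172845-12108")
	if err != nil {
		panic(err)
	}
	fmt.Println("status:", info.State.Status)
	for port, bindings := range info.NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
		}
	}
}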
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220602172845-12108 -n functional-20220602172845-12108
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220602172845-12108 -n functional-20220602172845-12108: (6.4547321s)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 logs -n 25: (8.1376039s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmd logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|-----------------------------------------------------------------------------------------------------|---------------------------------|-------------------|----------------|---------------------|---------------------|
	|    Command     |                                                Args                                                 |             Profile             |       User        |    Version     |     Start Time      |      End Time       |
	|----------------|-----------------------------------------------------------------------------------------------------|---------------------------------|-------------------|----------------|---------------------|---------------------|
	| cp             | functional-20220602172845-12108                                                                     | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:36 GMT | 02 Jun 22 17:36 GMT |
	|                | cp testdata\cp-test.txt                                                                             |                                 |                   |                |                     |                     |
	|                | /home/docker/cp-test.txt                                                                            |                                 |                   |                |                     |                     |
	| ssh            | functional-20220602172845-12108                                                                     | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:36 GMT | 02 Jun 22 17:36 GMT |
	|                | ssh -n                                                                                              |                                 |                   |                |                     |                     |
	|                | functional-20220602172845-12108                                                                     |                                 |                   |                |                     |                     |
	|                | sudo cat                                                                                            |                                 |                   |                |                     |                     |
	|                | /home/docker/cp-test.txt                                                                            |                                 |                   |                |                     |                     |
	| cp             | functional-20220602172845-12108 cp functional-20220602172845-12108:/home/docker/cp-test.txt         | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:36 GMT | 02 Jun 22 17:36 GMT |
	|                | C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalparallelCpCmd2696599892\001\cp-test.txt |                                 |                   |                |                     |                     |
	| ssh            | functional-20220602172845-12108                                                                     | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:36 GMT | 02 Jun 22 17:36 GMT |
	|                | ssh -n                                                                                              |                                 |                   |                |                     |                     |
	|                | functional-20220602172845-12108                                                                     |                                 |                   |                |                     |                     |
	|                | sudo cat                                                                                            |                                 |                   |                |                     |                     |
	|                | /home/docker/cp-test.txt                                                                            |                                 |                   |                |                     |                     |
	| image          | functional-20220602172845-12108 image load --daemon                                                 | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:36 GMT | 02 Jun 22 17:36 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-20220602172845-12108                              |                                 |                   |                |                     |                     |
	| image          | functional-20220602172845-12108                                                                     | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:36 GMT | 02 Jun 22 17:36 GMT |
	|                | image ls                                                                                            |                                 |                   |                |                     |                     |
	| image          | functional-20220602172845-12108 image load --daemon                                                 | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:36 GMT | 02 Jun 22 17:37 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-20220602172845-12108                              |                                 |                   |                |                     |                     |
	| image          | functional-20220602172845-12108                                                                     | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 GMT | 02 Jun 22 17:37 GMT |
	|                | image ls                                                                                            |                                 |                   |                |                     |                     |
	| image          | functional-20220602172845-12108 image load --daemon                                                 | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 GMT | 02 Jun 22 17:37 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-20220602172845-12108                              |                                 |                   |                |                     |                     |
	| update-context | functional-20220602172845-12108                                                                     | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 GMT | 02 Jun 22 17:37 GMT |
	|                | update-context                                                                                      |                                 |                   |                |                     |                     |
	|                | --alsologtostderr -v=2                                                                              |                                 |                   |                |                     |                     |
	| image          | functional-20220602172845-12108                                                                     | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 GMT | 02 Jun 22 17:37 GMT |
	|                | image ls                                                                                            |                                 |                   |                |                     |                     |
	| update-context | functional-20220602172845-12108                                                                     | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 GMT | 02 Jun 22 17:37 GMT |
	|                | update-context                                                                                      |                                 |                   |                |                     |                     |
	|                | --alsologtostderr -v=2                                                                              |                                 |                   |                |                     |                     |
	| update-context | functional-20220602172845-12108                                                                     | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 GMT | 02 Jun 22 17:37 GMT |
	|                | update-context                                                                                      |                                 |                   |                |                     |                     |
	|                | --alsologtostderr -v=2                                                                              |                                 |                   |                |                     |                     |
	| image          | functional-20220602172845-12108 image save                                                          | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 GMT | 02 Jun 22 17:37 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-20220602172845-12108                              |                                 |                   |                |                     |                     |
	|                | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar                              |                                 |                   |                |                     |                     |
	| image          | functional-20220602172845-12108 image rm                                                            | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 GMT | 02 Jun 22 17:37 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-20220602172845-12108                              |                                 |                   |                |                     |                     |
	| image          | functional-20220602172845-12108                                                                     | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 GMT | 02 Jun 22 17:37 GMT |
	|                | image ls                                                                                            |                                 |                   |                |                     |                     |
	| image          | functional-20220602172845-12108 image load                                                          | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 GMT | 02 Jun 22 17:37 GMT |
	|                | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar                              |                                 |                   |                |                     |                     |
	| image          | functional-20220602172845-12108                                                                     | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 GMT | 02 Jun 22 17:37 GMT |
	|                | image ls                                                                                            |                                 |                   |                |                     |                     |
	| image          | functional-20220602172845-12108 image save --daemon                                                 | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:38 GMT | 02 Jun 22 17:38 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-20220602172845-12108                              |                                 |                   |                |                     |                     |
	| image          | functional-20220602172845-12108                                                                     | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:38 GMT | 02 Jun 22 17:38 GMT |
	|                | image ls --format short                                                                             |                                 |                   |                |                     |                     |
	| image          | functional-20220602172845-12108                                                                     | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:38 GMT | 02 Jun 22 17:38 GMT |
	|                | image ls --format yaml                                                                              |                                 |                   |                |                     |                     |
	| image          | functional-20220602172845-12108                                                                     | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:38 GMT | 02 Jun 22 17:38 GMT |
	|                | image ls --format json                                                                              |                                 |                   |                |                     |                     |
	| image          | functional-20220602172845-12108                                                                     | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:38 GMT | 02 Jun 22 17:38 GMT |
	|                | image ls --format table                                                                             |                                 |                   |                |                     |                     |
	| image          | functional-20220602172845-12108 image build -t                                                      | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:38 GMT | 02 Jun 22 17:38 GMT |
	|                | localhost/my-image:functional-20220602172845-12108                                                  |                                 |                   |                |                     |                     |
	|                | testdata\build                                                                                      |                                 |                   |                |                     |                     |
	| image          | functional-20220602172845-12108                                                                     | functional-20220602172845-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 17:38 GMT | 02 Jun 22 17:38 GMT |
	|                | image ls                                                                                            |                                 |                   |                |                     |                     |
	|----------------|-----------------------------------------------------------------------------------------------------|---------------------------------|-------------------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 17:35:35
	Running on machine: minikube7
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 17:35:35.613351   12600 out.go:296] Setting OutFile to fd 672 ...
	I0602 17:35:35.674499   12600 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:35:35.674499   12600 out.go:309] Setting ErrFile to fd 716...
	I0602 17:35:35.674499   12600 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:35:35.687754   12600 out.go:303] Setting JSON to false
	I0602 17:35:35.690043   12600 start.go:115] hostinfo: {"hostname":"minikube7","uptime":54477,"bootTime":1654136858,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0602 17:35:35.690043   12600 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 17:35:35.695305   12600 out.go:177] * [functional-20220602172845-12108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0602 17:35:35.698812   12600 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0602 17:35:35.701911   12600 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0602 17:35:35.704480   12600 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 17:35:35.706590   12600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 17:35:35.710270   12600 config.go:178] Loaded profile config "functional-20220602172845-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 17:35:35.711373   12600 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 17:35:38.327166   12600 docker.go:137] docker version: linux-20.10.16
	I0602 17:35:38.334875   12600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 17:35:40.422069   12600 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0871845s)
	I0602 17:35:40.423137   12600 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:54 SystemTime:2022-06-02 17:35:39.3867836 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0602 17:35:40.427485   12600 out.go:177] * Using the docker driver based on existing profile
	I0602 17:35:40.430176   12600 start.go:284] selected driver: docker
	I0602 17:35:40.430176   12600 start.go:806] validating driver "docker" against &{Name:functional-20220602172845-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220602172845-12108 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false reg
istry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 17:35:40.430176   12600 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 17:35:40.451493   12600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 17:35:42.446735   12600 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9952326s)
	I0602 17:35:42.446735   12600 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:54 SystemTime:2022-06-02 17:35:41.4676763 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0602 17:35:42.499902   12600 cni.go:95] Creating CNI manager for ""
	I0602 17:35:42.499902   12600 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 17:35:42.499902   12600 start_flags.go:306] config:
	{Name:functional-20220602172845-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220602172845-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clu
ster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true
storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 17:35:42.506382   12600 out.go:177] * dry-run validation complete!
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-06-02 17:29:39 UTC, end at Thu 2022-06-02 18:08:59 UTC. --
	Jun 02 17:29:57 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:29:57.550276300Z" level=info msg="API listen on /var/run/docker.sock"
	Jun 02 17:30:54 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:30:54.955361500Z" level=info msg="ignoring event" container=6109aec4f76c32123131a9950048e1b2680624bbf4f2abdb5fbbea382e2bae4d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:30:55 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:30:55.177866900Z" level=info msg="ignoring event" container=88afb34bb331664affe59a361dd5c3ffd9a2345ec5af59e7dba6ccda8c8d1c48 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:33:02 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:02.613117300Z" level=info msg="ignoring event" container=171d182f4c0e73a955e9602dfee0071f054fe96c2aa9893733f132b2184f8293 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:33:02 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:02.702451900Z" level=info msg="ignoring event" container=122eab7cec752e2c45bd0016a01e9a27f6528b4eb59b4f680d8675bce7493304 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:33:02 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:02.705199000Z" level=info msg="ignoring event" container=2726828644f24dff932f4d0c265809664fffb59ac10f4a95c942f4066e28b101 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:33:02 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:02.705370200Z" level=info msg="ignoring event" container=16005854d998fa81ab4593f0bce15225c7f2083fa4902bc0e14e03f6c097c550 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:33:02 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:02.705979100Z" level=info msg="ignoring event" container=8e12b5d77efff8ad939f279ea7a58f375cd834d467f43d9a932fbbb6eba241bc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:33:02 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:02.906285800Z" level=info msg="ignoring event" container=8aa623a7d4491f7f7814495e952d2fb1e0fdfc313d9828338842cea1a9776245 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:33:03 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:03.108434200Z" level=info msg="ignoring event" container=34874a7d34918a2a4bde07146bbac312628fb19b7a87cf8c01bff98470ed82a4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:33:03 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:03.116174100Z" level=info msg="ignoring event" container=d7e206bfa6439da5e29e68d353cc7e4e602abe2aeba2680db66406b4c569691a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:33:03 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:03.116376600Z" level=info msg="ignoring event" container=dfdd7bb40a542103af6ae3c7e98c89a1b6933da85d4d129b174c905078a9f9e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:33:03 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:03.120470000Z" level=info msg="ignoring event" container=2efd45f063a22d30166d80b93a9dcb3574448b784412ebb9894b7b8406eddf81 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:33:03 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:03.317099900Z" level=info msg="ignoring event" container=8bd63a31a2d68a9f6d952871abbe1c2afdc2407cab5c2949a3f3eaa683d4aa18 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:33:05 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:05.316362900Z" level=info msg="ignoring event" container=6d6e979e17aad4d6ea111c47ee5171316116c5131bf9c1a6f139c7c6da1f5d37 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:33:05 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:05.522881500Z" level=info msg="ignoring event" container=566cdb4a240568af0731c62e471d6bfcd1036a8daae1b5d140ac9189e3516227 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:33:06 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:06.314558700Z" level=info msg="ignoring event" container=21743903ddc531db44b257639c5ffeb72b81d8088d26b5cc139d3151c6a4a590 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:33:07 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:07.824308000Z" level=info msg="ignoring event" container=b733faa6cfc222aefb11ce7a89c72a66bb6ef2e7a3cb1d8937c69004fe03d2e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:33:19 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:19.700877400Z" level=info msg="ignoring event" container=e8d8f7baf75548076d13e1dc81c21b7ea1acf0614a5a2e7e4c6b7ab3ce860212 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:33:19 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:19.700934100Z" level=info msg="ignoring event" container=aa88977d90f779f4c40fa8b71870b33b10e4cc08f0e5aaef66f890e63beb7e35 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:33:27 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:33:27.052295500Z" level=info msg="ignoring event" container=73ee85e0c6100158f8e9b7c49ab364726698e2a51b0f019c174582fbe125e3c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:34:55 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:34:55.806788300Z" level=info msg="ignoring event" container=baa79e09163dca307a9db5772b47da0d18a764189d37889647f7bac4e80d89f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:34:57 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:34:57.403135400Z" level=info msg="ignoring event" container=dd6055af112ccf04b8d243661af45b2f91d1b43cb41a82081834267f758192a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:38:33 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:38:33.392965900Z" level=info msg="ignoring event" container=1beaa5aad4da9d8e9bec9e82bbad0b86b1c6cea2e2087a09fc88a649a957181c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:38:34 functional-20220602172845-12108 dockerd[509]: time="2022-06-02T17:38:34.029319700Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
	
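	The "ignoring event ... TaskDelete" entries above are routine libcontainerd teardown notices emitted as containers exit, not errors. The same daemon log can be pulled directly from the node, for example:
	
	    minikube -p functional-20220602172845-12108 ssh -- sudo journalctl -u docker --no-pager
	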
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                           CREATED             STATE               NAME                      ATTEMPT             POD ID
	72a6bb0f9d4b5       mysql@sha256:7e99b2b8d5bca914ef31059858210f57b009c40375d647f0d4d65ecd01d6b1d5                   32 minutes ago      Running             mysql                     0                   c334f32103670
	fedf76bb319dd       nginx@sha256:2bcabc23b45489fb0885d69a06ba1d648aeda973fae7bb981bafbb884165e514                   33 minutes ago      Running             myfrontend                0                   39c76780d10c2
	2c4191d862c83       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969   34 minutes ago      Running             echoserver                0                   8123fc8f742a2
	149fe59761812       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969   34 minutes ago      Running             echoserver                0                   560ad2624a500
	aedbc0efe8bec       nginx@sha256:a74534e76ee1121d418fa7394ca930eb67440deda413848bc67c68138535b989                   34 minutes ago      Running             nginx                     0                   d14787edbecf3
	1b0e88fdb16da       df7b72818ad2e                                                                                   35 minutes ago      Running             kube-controller-manager   2                   1880164f78fb4
	d626c0c1fde94       a4ca41631cc7a                                                                                   35 minutes ago      Running             coredns                   1                   0eedc77f9c31e
	b46fbda7e4d23       6e38f40d628db                                                                                   35 minutes ago      Running             storage-provisioner       2                   7846c2947960e
	7ba303e0303c8       8fa62c12256df                                                                                   35 minutes ago      Running             kube-apiserver            0                   b22edf049daa2
	764f22f755e2c       595f327f224a4                                                                                   35 minutes ago      Running             kube-scheduler            1                   877ddf09aad5a
	9bb388ba532a5       4c03754524064                                                                                   35 minutes ago      Running             kube-proxy                1                   681f0a9852d79
	e7d69d28699d3       25f8c7f3da61c                                                                                   35 minutes ago      Running             etcd                      1                   1c13432e19bf8
	21743903ddc53       6e38f40d628db                                                                                   35 minutes ago      Exited              storage-provisioner       1                   7846c2947960e
	73ee85e0c6100       df7b72818ad2e                                                                                   35 minutes ago      Exited              kube-controller-manager   1                   1880164f78fb4
	b733faa6cfc22       a4ca41631cc7a                                                                                   38 minutes ago      Exited              coredns                   0                   34874a7d34918
	2efd45f063a22       4c03754524064                                                                                   38 minutes ago      Exited              kube-proxy                0                   16005854d998f
	6d6e979e17aad       595f327f224a4                                                                                   38 minutes ago      Exited              kube-scheduler            0                   8e12b5d77efff
	8bd63a31a2d68       25f8c7f3da61c                                                                                   38 minutes ago      Exited              etcd                      0                   dfdd7bb40a542
	
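	The table above is the container runtime's view from inside the node; an equivalent listing can be reproduced with:
	
	    minikube -p functional-20220602172845-12108 ssh -- docker ps -a
	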
	* 
	* ==> coredns [b733faa6cfc2] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
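	The Reloading and lameduck lines come from CoreDNS's reload and health plugins; the 5s lameduck window is configured in the Corefile. A minimal Corefile sketch with the relevant directives (not necessarily the exact one deployed here):
	
	    .:53 {
	        health {
	            lameduck 5s
	        }
	        ready
	        kubernetes cluster.local in-addr.arpa ip6.arpa
	        forward . /etc/resolv.conf
	        reload
	    }
	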
	* 
	* ==> coredns [d626c0c1fde9] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
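	The "Still waiting on: kubernetes" lines are the ready plugin holding back the readiness endpoint (:8181/ready by default) until the kubernetes plugin has synced; the pod only reports Ready once they stop. A quick external check (sketch):
	
	    kubectl --context functional-20220602172845-12108 -n kube-system get pods -l k8s-app=kube-dns
	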
	* 
	* ==> describe nodes <==
	* Name:               functional-20220602172845-12108
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-20220602172845-12108
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae
	                    minikube.k8s.io/name=functional-20220602172845-12108
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_02T17_30_29_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Jun 2022 17:30:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20220602172845-12108
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Jun 2022 18:08:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Jun 2022 18:04:21 +0000   Thu, 02 Jun 2022 17:30:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Jun 2022 18:04:21 +0000   Thu, 02 Jun 2022 17:30:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Jun 2022 18:04:21 +0000   Thu, 02 Jun 2022 17:30:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Jun 2022 18:04:21 +0000   Thu, 02 Jun 2022 17:30:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20220602172845-12108
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	System Info:
	  Machine ID:                 a34bb2508bce429bb90502b0ef044420
	  System UUID:                a34bb2508bce429bb90502b0ef044420
	  Boot ID:                    174c87a1-4ba0-4f3f-a840-04757270163f
	  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-54fbb85-l5tpx                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         34m
	  default                     hello-node-connect-74cf8bc446-qjhfg                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         34m
	  default                     mysql-b87c45988-mbb25                                      600m (3%)     700m (4%)   512Mi (0%)       700Mi (1%)     33m
	  default                     nginx-svc                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         34m
	  default                     sp-pod                                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         34m
	  kube-system                 coredns-64897985d-xlttb                                    100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     38m
	  kube-system                 etcd-functional-20220602172845-12108                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         38m
	  kube-system                 kube-apiserver-functional-20220602172845-12108             250m (1%)     0 (0%)      0 (0%)           0 (0%)         35m
	  kube-system                 kube-controller-manager-functional-20220602172845-12108    200m (1%)     0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 kube-proxy-qxvkt                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 kube-scheduler-functional-20220602172845-12108             100m (0%)     0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                1350m (8%)  700m (4%)
	  memory             682Mi (1%)  870Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 35m                kube-proxy  
	  Normal  Starting                 38m                kube-proxy  
	  Normal  NodeHasNoDiskPressure    38m (x4 over 38m)  kubelet     Node functional-20220602172845-12108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38m (x4 over 38m)  kubelet     Node functional-20220602172845-12108 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  38m                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  38m (x4 over 38m)  kubelet     Node functional-20220602172845-12108 status is now: NodeHasSufficientMemory
	  Normal  Starting                 38m                kubelet     Starting kubelet.
	  Normal  NodeHasNoDiskPressure    38m                kubelet     Node functional-20220602172845-12108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38m                kubelet     Node functional-20220602172845-12108 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  38m                kubelet     Node functional-20220602172845-12108 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  38m                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                38m                kubelet     Node functional-20220602172845-12108 status is now: NodeReady
	  Normal  Starting                 35m                kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  35m                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  35m (x7 over 35m)  kubelet     Node functional-20220602172845-12108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35m (x7 over 35m)  kubelet     Node functional-20220602172845-12108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35m (x7 over 35m)  kubelet     Node functional-20220602172845-12108 status is now: NodeHasSufficientPID
	
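	This section is standard describe output and can be regenerated at any point during a run with:
	
	    kubectl --context functional-20220602172845-12108 describe node functional-20220602172845-12108
	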
	* 
	* ==> dmesg <==
	* [Jun 2 17:43] WSL2: Performing memory compaction.
	[Jun 2 17:44] WSL2: Performing memory compaction.
	[Jun 2 17:45] WSL2: Performing memory compaction.
	[Jun 2 17:46] WSL2: Performing memory compaction.
	[Jun 2 17:47] WSL2: Performing memory compaction.
	[Jun 2 17:48] WSL2: Performing memory compaction.
	[Jun 2 17:49] WSL2: Performing memory compaction.
	[Jun 2 17:50] WSL2: Performing memory compaction.
	[Jun 2 17:51] WSL2: Performing memory compaction.
	[Jun 2 17:52] WSL2: Performing memory compaction.
	[Jun 2 17:53] WSL2: Performing memory compaction.
	[Jun 2 17:54] WSL2: Performing memory compaction.
	[Jun 2 17:55] WSL2: Performing memory compaction.
	[Jun 2 17:56] WSL2: Performing memory compaction.
	[Jun 2 17:57] WSL2: Performing memory compaction.
	[Jun 2 17:58] WSL2: Performing memory compaction.
	[Jun 2 17:59] WSL2: Performing memory compaction.
	[Jun 2 18:00] WSL2: Performing memory compaction.
	[Jun 2 18:01] WSL2: Performing memory compaction.
	[Jun 2 18:02] WSL2: Performing memory compaction.
	[Jun 2 18:04] WSL2: Performing memory compaction.
	[Jun 2 18:05] WSL2: Performing memory compaction.
	[Jun 2 18:06] WSL2: Performing memory compaction.
	[Jun 2 18:07] WSL2: Performing memory compaction.
	[Jun 2 18:08] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [8bd63a31a2d6] <==
	* {"level":"info","ts":"2022-06-02T17:30:18.622Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-02T17:30:18.623Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-02T17:30:18.623Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2022-06-02T17:30:24.417Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"105.4264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"warn","ts":"2022-06-02T17:30:24.418Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"105.5579ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:352"}
	{"level":"info","ts":"2022-06-02T17:30:24.418Z","caller":"traceutil/trace.go:171","msg":"trace[1774619180] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:29; }","duration":"105.6827ms","start":"2022-06-02T17:30:24.312Z","end":"2022-06-02T17:30:24.418Z","steps":["trace[1774619180] 'agreement among raft nodes before linearized reading'  (duration: 22.3896ms)","trace[1774619180] 'range keys from in-memory index tree'  (duration: 83.1483ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-02T17:30:24.418Z","caller":"traceutil/trace.go:171","msg":"trace[570474075] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:29; }","duration":"105.664ms","start":"2022-06-02T17:30:24.312Z","end":"2022-06-02T17:30:24.418Z","steps":["trace[570474075] 'agreement among raft nodes before linearized reading'  (duration: 22.3676ms)","trace[570474075] 'range keys from in-memory index tree'  (duration: 83.0427ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-02T17:30:24.418Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.7701ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:119"}
	{"level":"warn","ts":"2022-06-02T17:30:24.418Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"104.3459ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/system-leader-election\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2022-06-02T17:30:24.418Z","caller":"traceutil/trace.go:171","msg":"trace[1083361185] range","detail":"{range_begin:/registry/flowschemas/system-leader-election; range_end:; response_count:0; response_revision:29; }","duration":"104.4003ms","start":"2022-06-02T17:30:24.313Z","end":"2022-06-02T17:30:24.418Z","steps":["trace[1083361185] 'agreement among raft nodes before linearized reading'  (duration: 20.9559ms)","trace[1083361185] 'range keys from in-memory index tree'  (duration: 83.3714ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-02T17:30:24.418Z","caller":"traceutil/trace.go:171","msg":"trace[504897351] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:29; }","duration":"106.9576ms","start":"2022-06-02T17:30:24.311Z","end":"2022-06-02T17:30:24.418Z","steps":["trace[504897351] 'agreement among raft nodes before linearized reading'  (duration: 23.5141ms)","trace[504897351] 'range keys from in-memory index tree'  (duration: 83.1862ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-02T17:30:24.418Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.6995ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:114"}
	{"level":"info","ts":"2022-06-02T17:30:24.418Z","caller":"traceutil/trace.go:171","msg":"trace[2016913714] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:29; }","duration":"107.2084ms","start":"2022-06-02T17:30:24.311Z","end":"2022-06-02T17:30:24.418Z","steps":["trace[2016913714] 'agreement among raft nodes before linearized reading'  (duration: 23.4617ms)","trace[2016913714] 'range keys from in-memory index tree'  (duration: 83.1516ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-02T17:30:41.321Z","caller":"traceutil/trace.go:171","msg":"trace[1401755233] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"113.6492ms","start":"2022-06-02T17:30:41.207Z","end":"2022-06-02T17:30:41.321Z","steps":["trace[1401755233] 'process raft request'  (duration: 93.9624ms)","trace[1401755233] 'compare'  (duration: 18.93ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-02T17:30:46.824Z","caller":"traceutil/trace.go:171","msg":"trace[909353217] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"110.7065ms","start":"2022-06-02T17:30:46.713Z","end":"2022-06-02T17:30:46.824Z","steps":["trace[909353217] 'process raft request'  (duration: 87.5269ms)","trace[909353217] 'compare'  (duration: 22.965ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-02T17:30:56.311Z","caller":"traceutil/trace.go:171","msg":"trace[2109093670] transaction","detail":"{read_only:false; response_revision:495; number_of_response:1; }","duration":"104.4418ms","start":"2022-06-02T17:30:56.206Z","end":"2022-06-02T17:30:56.311Z","steps":["trace[2109093670] 'process raft request'  (duration: 104.3814ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-02T17:30:56.311Z","caller":"traceutil/trace.go:171","msg":"trace[1922025376] transaction","detail":"{read_only:false; number_of_response:1; response_revision:494; }","duration":"110.4308ms","start":"2022-06-02T17:30:56.201Z","end":"2022-06-02T17:30:56.311Z","steps":["trace[1922025376] 'compare'  (duration: 87.4346ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-02T17:33:02.802Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-06-02T17:33:02.802Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20220602172845-12108","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2022/06/02 17:33:02 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/02 17:33:03 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-06-02T17:33:03.003Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2022-06-02T17:33:03.013Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-02T17:33:03.016Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-02T17:33:03.016Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20220602172845-12108","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
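	The "apply request took too long" warnings mean reads exceeded etcd's 100ms expected duration; on a shared CI host this usually points to disk or CPU contention rather than a cluster fault. One way to probe the member directly (a sketch; the certificate paths assume minikube's default certificate directory):
	
	    kubectl --context functional-20220602172845-12108 -n kube-system exec etcd-functional-20220602172845-12108 -- \
	      etcdctl --endpoints=https://127.0.0.1:2379 \
	      --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/server.crt \
	      --key=/var/lib/minikube/certs/etcd/server.key endpoint status -w table
	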
	* 
	* ==> etcd [e7d69d28699d] <==
	* {"level":"warn","ts":"2022-06-02T17:36:25.810Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-02T17:36:25.224Z","time spent":"586.2614ms","remote":"127.0.0.1:34670","response type":"/etcdserverpb.KV/Range","request count":0,"request size":35,"response count":1,"response size":427,"request content":"key:\"/registry/ranges/servicenodeports\" "}
	{"level":"info","ts":"2022-06-02T17:36:38.369Z","caller":"traceutil/trace.go:171","msg":"trace[1191981672] linearizableReadLoop","detail":"{readStateIndex:1020; appliedIndex:1020; }","duration":"116.746ms","start":"2022-06-02T17:36:38.252Z","end":"2022-06-02T17:36:38.369Z","steps":["trace[1191981672] 'read index received'  (duration: 116.7355ms)","trace[1191981672] 'applied index is now lower than readState.Index'  (duration: 7.6µs)"],"step_count":2}
	{"level":"warn","ts":"2022-06-02T17:36:38.398Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"145.8866ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-02T17:36:38.398Z","caller":"traceutil/trace.go:171","msg":"trace[1799755926] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:913; }","duration":"146.0811ms","start":"2022-06-02T17:36:38.252Z","end":"2022-06-02T17:36:38.398Z","steps":["trace[1799755926] 'agreement among raft nodes before linearized reading'  (duration: 117.0069ms)","trace[1799755926] 'count revisions from in-memory index tree'  (duration: 28.8544ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-02T17:36:38.398Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"119.1687ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-02T17:36:38.398Z","caller":"traceutil/trace.go:171","msg":"trace[1421134268] range","detail":"{range_begin:/registry/controllers/; range_end:/registry/controllers0; response_count:0; response_revision:913; }","duration":"119.2197ms","start":"2022-06-02T17:36:38.279Z","end":"2022-06-02T17:36:38.398Z","steps":["trace[1421134268] 'agreement among raft nodes before linearized reading'  (duration: 90.136ms)","trace[1421134268] 'count revisions from in-memory index tree'  (duration: 29.0158ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-02T17:36:57.588Z","caller":"traceutil/trace.go:171","msg":"trace[589399620] linearizableReadLoop","detail":"{readStateIndex:1038; appliedIndex:1037; }","duration":"184.7549ms","start":"2022-06-02T17:36:57.403Z","end":"2022-06-02T17:36:57.588Z","steps":["trace[589399620] 'read index received'  (duration: 183.4298ms)","trace[589399620] 'applied index is now lower than readState.Index'  (duration: 1.321ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-02T17:36:57.588Z","caller":"traceutil/trace.go:171","msg":"trace[63869590] transaction","detail":"{read_only:false; response_revision:927; number_of_response:1; }","duration":"446.2825ms","start":"2022-06-02T17:36:57.142Z","end":"2022-06-02T17:36:57.588Z","steps":["trace[63869590] 'process raft request'  (duration: 444.8829ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T17:36:57.588Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-02T17:36:57.142Z","time spent":"446.3547ms","remote":"127.0.0.1:34636","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.49.2\" mod_revision:919 > success:<request_put:<key:\"/registry/masterleases/192.168.49.2\" value_size:67 lease:8128013425464041231 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.2\" > >"}
	{"level":"warn","ts":"2022-06-02T17:36:57.589Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"185.2684ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-02T17:36:57.589Z","caller":"traceutil/trace.go:171","msg":"trace[1713815688] range","detail":"{range_begin:/registry/ingress/; range_end:/registry/ingress0; response_count:0; response_revision:927; }","duration":"185.4318ms","start":"2022-06-02T17:36:57.403Z","end":"2022-06-02T17:36:57.589Z","steps":["trace[1713815688] 'agreement among raft nodes before linearized reading'  (duration: 185.0151ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T17:37:37.122Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"108.4254ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-02T17:37:37.123Z","caller":"traceutil/trace.go:171","msg":"trace[1947285661] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:955; }","duration":"108.6294ms","start":"2022-06-02T17:37:37.014Z","end":"2022-06-02T17:37:37.123Z","steps":["trace[1947285661] 'range keys from in-memory index tree'  (duration: 108.3134ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-02T17:43:20.949Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":988}
	{"level":"info","ts":"2022-06-02T17:43:20.951Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":988,"took":"1.3657ms"}
	{"level":"info","ts":"2022-06-02T17:48:20.967Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1198}
	{"level":"info","ts":"2022-06-02T17:48:20.968Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1198,"took":"798.2µs"}
	{"level":"info","ts":"2022-06-02T17:53:20.985Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1408}
	{"level":"info","ts":"2022-06-02T17:53:20.986Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1408,"took":"663.2µs"}
	{"level":"info","ts":"2022-06-02T17:58:21.002Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1618}
	{"level":"info","ts":"2022-06-02T17:58:21.004Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1618,"took":"796.3µs"}
	{"level":"info","ts":"2022-06-02T18:03:21.020Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1828}
	{"level":"info","ts":"2022-06-02T18:03:21.021Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1828,"took":"934.8µs"}
	{"level":"info","ts":"2022-06-02T18:08:21.035Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2038}
	{"level":"info","ts":"2022-06-02T18:08:21.036Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":2038,"took":"634.6µs"}
	
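	The five-minute cadence of the "compact tree index" entries matches kube-apiserver's default etcd compaction interval; the apiserver requests the compactions and etcd merely logs them. The relevant apiserver flag, shown here with its default value (sketch):
	
	    kube-apiserver ... --etcd-compaction-interval=5m0s
	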
	* 
	* ==> kernel <==
	*  18:09:00 up 58 min,  0 users,  load average: 0.42, 0.32, 0.45
	Linux functional-20220602172845-12108 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [7ba303e0303c] <==
	* I0602 17:34:36.903145       1 trace.go:205] Trace[497186442]: "Update" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:3e2988d3-a7d8-4081-861e-249452a4eb8e,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (02-Jun-2022 17:34:35.815) (total time: 1087ms):
	Trace[497186442]: ---"Object stored in database" 999ms (17:34:36.902)
	Trace[497186442]: [1.087817s] [1.087817s] END
	I0602 17:34:36.903208       1 trace.go:205] Trace[148171019]: "Update" url:/apis/apps/v1/namespaces/default/deployments/hello-node/status,user-agent:kube-controller-manager/v1.23.6 (linux/amd64) kubernetes/ad33385/system:serviceaccount:kube-system:deployment-controller,audit-id:f94b95c4-3f05-4b8e-918f-cce13fe5f05b,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (02-Jun-2022 17:34:35.905) (total time: 998ms):
	Trace[148171019]: ---"Object stored in database" 997ms (17:34:36.903)
	Trace[148171019]: [998.0108ms] [998.0108ms] END
	I0602 17:34:36.915213       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.99.217.46]
	I0602 17:34:36.915418       1 trace.go:205] Trace[658946329]: "Create" url:/api/v1/namespaces/default/services,user-agent:kubectl.exe/v1.18.2 (windows/amd64) kubernetes/52c56ce,audit-id:0b497938-188e-4c38-95e9-458b54ccdad6,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (02-Jun-2022 17:34:35.803) (total time: 1111ms):
	Trace[658946329]: ---"Object stored in database" 1111ms (17:34:36.915)
	Trace[658946329]: [1.1115856s] [1.1115856s] END
	I0602 17:35:43.105842       1 alloc.go:329] "allocated clusterIPs" service="default/mysql" clusterIPs=map[IPv4:10.104.243.176]
	I0602 17:36:25.810547       1 trace.go:205] Trace[831226069]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:aad7c730-33b7-4c99-9a3e-889e24e3eb41,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (02-Jun-2022 17:36:24.424) (total time: 1385ms):
	Trace[831226069]: ---"About to write a response" 1385ms (17:36:25.810)
	Trace[831226069]: [1.3856389s] [1.3856389s] END
	I0602 17:36:25.810764       1 trace.go:205] Trace[1244335267]: "List etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (02-Jun-2022 17:36:24.222) (total time: 1587ms):
	Trace[1244335267]: [1.5879435s] [1.5879435s] END
	I0602 17:36:25.810546       1 trace.go:205] Trace[868752966]: "Get" url:/api/v1/namespaces/default/services/nginx-svc,user-agent:kubectl.exe/v1.18.2 (windows/amd64) kubernetes/52c56ce,audit-id:36708dc3-e450-488c-861d-92bd08e5b67e,client:192.168.49.1,accept:application/json,protocol:HTTP/2.0 (02-Jun-2022 17:36:23.859) (total time: 1950ms):
	Trace[868752966]: ---"About to write a response" 1950ms (17:36:25.810)
	Trace[868752966]: [1.9506941s] [1.9506941s] END
	I0602 17:36:25.811631       1 trace.go:205] Trace[451222435]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:1f73a754-efc9-495b-b471-dbbf00a8de8e,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (02-Jun-2022 17:36:24.222) (total time: 1588ms):
	Trace[451222435]: ---"Listing from storage done" 1588ms (17:36:25.810)
	Trace[451222435]: [1.5888983s] [1.5888983s] END
	W0602 17:46:16.623442       1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
	W0602 17:55:03.636337       1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
	W0602 18:04:29.873047       1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
	
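	The "required revision has been compacted" watch errors are the expected counterpart of the compactions above: a watcher fell behind the compacted revision, and well-behaved clients (client-go informers included) recover by re-listing and re-watching. To follow these messages in a live run (sketch):
	
	    kubectl --context functional-20220602172845-12108 -n kube-system logs kube-apiserver-functional-20220602172845-12108 --tail=50
	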
	* 
	* ==> kube-controller-manager [1b0e88fdb16d] <==
	* I0602 17:33:40.603736       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0602 17:33:40.607680       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0602 17:33:40.610220       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0602 17:33:40.610383       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0602 17:33:40.612008       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0602 17:33:40.612192       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0602 17:33:40.620002       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 17:33:40.629084       1 shared_informer.go:247] Caches are synced for cronjob 
	I0602 17:33:40.702120       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0602 17:33:40.702270       1 shared_informer.go:247] Caches are synced for service account 
	I0602 17:33:40.702826       1 shared_informer.go:247] Caches are synced for job 
	I0602 17:33:40.703753       1 shared_informer.go:247] Caches are synced for namespace 
	I0602 17:33:40.704838       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 17:33:40.716759       1 shared_informer.go:247] Caches are synced for attach detach 
	I0602 17:33:41.108301       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 17:33:41.180581       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 17:33:41.180757       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0602 17:34:25.073098       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:34:25.073243       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:34:34.703002       1 event.go:294] "Event occurred" object="default/hello-node-connect" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-74cf8bc446 to 1"
	I0602 17:34:34.904761       1 event.go:294] "Event occurred" object="default/hello-node-connect-74cf8bc446" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-74cf8bc446-qjhfg"
	I0602 17:34:35.304018       1 event.go:294] "Event occurred" object="default/hello-node" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-54fbb85 to 1"
	I0602 17:34:35.409147       1 event.go:294] "Event occurred" object="default/hello-node-54fbb85" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-54fbb85-l5tpx"
	I0602 17:35:43.212131       1 event.go:294] "Event occurred" object="default/mysql" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-b87c45988 to 1"
	I0602 17:35:43.324203       1 event.go:294] "Event occurred" object="default/mysql-b87c45988" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-b87c45988-mbb25"
	
	* 
	* ==> kube-controller-manager [73ee85e0c610] <==
	* 	/usr/local/go/src/bytes/buffer.go:204 +0x98
	crypto/tls.(*Conn).readFromUntil(0xc000130e00, {0x4d51100, 0xc0005a8058}, 0x8ef)
		/usr/local/go/src/crypto/tls/conn.go:799 +0xe5
	crypto/tls.(*Conn).readRecordOrCCS(0xc000130e00, 0x0)
		/usr/local/go/src/crypto/tls/conn.go:606 +0x112
	crypto/tls.(*Conn).readRecord(...)
		/usr/local/go/src/crypto/tls/conn.go:574
	crypto/tls.(*Conn).Read(0xc000130e00, {0xc000d32000, 0x1000, 0x919560})
		/usr/local/go/src/crypto/tls/conn.go:1277 +0x16f
	bufio.(*Reader).Read(0xc00017b620, {0xc0000e70e0, 0x9, 0x934bc2})
		/usr/local/go/src/bufio/bufio.go:227 +0x1b4
	io.ReadAtLeast({0x4d48ae0, 0xc00017b620}, {0xc0000e70e0, 0x9, 0x9}, 0x9)
		/usr/local/go/src/io/io.go:328 +0x9a
	io.ReadFull(...)
		/usr/local/go/src/io/io.go:347
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader({0xc0000e70e0, 0x9, 0xc001d074a0}, {0x4d48ae0, 0xc00017b620})
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x6e
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0000e70a0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:498 +0x95
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000a5ff98)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:2101 +0x130
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc0009e3b00)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1997 +0x6f
	created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:725 +0xac5
	
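	The goroutine dump above is the tail of the exited controller-manager container (73ee85e0c610, attempt 1 in the container-status table) shutting down while the apiserver restarted; attempt 2 (1b0e88fdb16d) replaced it. The prior container's full log is retrievable with the --previous flag (sketch):
	
	    kubectl --context functional-20220602172845-12108 -n kube-system logs kube-controller-manager-functional-20220602172845-12108 --previous
	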
	* 
	* ==> kube-proxy [2efd45f063a2] <==
	* E0602 17:30:45.022452       1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I0602 17:30:45.106745       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0602 17:30:45.110103       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0602 17:30:45.114380       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0602 17:30:45.118433       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0602 17:30:45.123477       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0602 17:30:45.316774       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0602 17:30:45.316884       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0602 17:30:45.317007       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 17:30:45.521001       1 server_others.go:206] "Using iptables Proxier"
	I0602 17:30:45.521138       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 17:30:45.521161       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 17:30:45.521203       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 17:30:45.522484       1 server.go:656] "Version info" version="v1.23.6"
	I0602 17:30:45.523620       1 config.go:317] "Starting service config controller"
	I0602 17:30:45.523766       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 17:30:45.523678       1 config.go:226] "Starting endpoint slice config controller"
	I0602 17:30:45.523980       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 17:30:45.701703       1 shared_informer.go:247] Caches are synced for service config 
	I0602 17:30:45.701735       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
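	The module-load messages describe themselves as ignorable when /lib/modules is not mounted into the container, and kube-proxy then proceeds in iptables mode, as the "Using iptables Proxier" line confirms. The loaded modules can be inspected on the node itself:
	
	    minikube -p functional-20220602172845-12108 ssh -- lsmod
	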
	* 
	* ==> kube-proxy [9bb388ba532a] <==
	* E0602 17:33:06.906472       1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I0602 17:33:06.910503       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0602 17:33:06.913742       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0602 17:33:06.916532       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0602 17:33:06.919736       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0602 17:33:06.922743       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	E0602 17:33:06.926472       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220602172845-12108": dial tcp 192.168.49.2:8441: connect: connection refused
	E0602 17:33:08.026457       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220602172845-12108": dial tcp 192.168.49.2:8441: connect: connection refused
	I0602 17:33:17.209151       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0602 17:33:17.209273       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0602 17:33:17.209436       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 17:33:17.614819       1 server_others.go:206] "Using iptables Proxier"
	I0602 17:33:17.615238       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 17:33:17.615264       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 17:33:17.615346       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 17:33:17.617852       1 server.go:656] "Version info" version="v1.23.6"
	I0602 17:33:17.620806       1 config.go:317] "Starting service config controller"
	I0602 17:33:17.621124       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 17:33:17.620923       1 config.go:226] "Starting endpoint slice config controller"
	I0602 17:33:17.621169       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 17:33:17.722650       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0602 17:33:17.722781       1 shared_informer.go:247] Caches are synced for service config 
	
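	The two "connection refused" retries line up with the apiserver restart window (17:33:06-17:33:17); kube-proxy keeps retrying the node lookup until the apiserver on port 8441 answers again. A quick liveness check against the same endpoint (sketch):
	
	    kubectl --context functional-20220602172845-12108 get --raw /healthz
	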
	* 
	* ==> kube-scheduler [6d6e979e17aa] <==
	* E0602 17:30:25.304284       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0602 17:30:25.304318       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0602 17:30:25.304375       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0602 17:30:25.304437       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0602 17:30:25.304461       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0602 17:30:25.305411       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0602 17:30:25.305524       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0602 17:30:25.356211       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0602 17:30:25.356341       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0602 17:30:25.404116       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0602 17:30:25.404165       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0602 17:30:25.463029       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0602 17:30:25.463183       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0602 17:30:25.563238       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0602 17:30:25.563480       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0602 17:30:25.603459       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0602 17:30:25.603630       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0602 17:30:25.603635       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0602 17:30:25.603660       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0602 17:30:25.704167       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0602 17:30:25.704272       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0602 17:30:28.220787       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0602 17:33:02.803787       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0602 17:33:02.805491       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	I0602 17:33:02.805655       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
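The burst of RBAC "forbidden" errors above is characteristic of the scheduler's informers coming up before kubeadm has finished installing the bootstrap role bindings; it stops once the bindings exist (the cache sync at 17:30:28 confirms recovery). A quick after-the-fact permission check, sketched against this run's kubectl context (not something the test itself executes):

	kubectl --context functional-20220602172845-12108 auth can-i list pods --as=system:kube-scheduler
	kubectl --context functional-20220602172845-12108 auth can-i watch services --as=system:kube-scheduler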
	
	* 
	* ==> kube-scheduler [764f22f755e2] <==
	* W0602 17:33:17.003203       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0602 17:33:17.003243       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0602 17:33:17.003259       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0602 17:33:17.003275       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0602 17:33:17.115792       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
	I0602 17:33:17.203043       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0602 17:33:17.203316       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0602 17:33:17.203081       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0602 17:33:17.203450       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0602 17:33:17.303702       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0602 17:33:25.009877       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
	E0602 17:33:25.010971       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	E0602 17:33:25.011203       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
	E0602 17:33:25.011624       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
	E0602 17:33:25.011704       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	E0602 17:33:25.011830       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: unknown (get pods)
	E0602 17:33:25.012819       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
	E0602 17:33:25.012873       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
	E0602 17:33:25.013010       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: unknown (get services)
	E0602 17:33:25.013260       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: unknown (get namespaces)
	E0602 17:33:25.101887       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	E0602 17:33:25.102017       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
	E0602 17:33:25.102079       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)
	E0602 17:33:25.102122       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
	E0602 17:33:25.102323       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
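These "unknown (get ...)" watch failures at 17:33:25 line up with the apiserver restart that the functional tests trigger; the informers re-list on their own once it returns. Had they persisted, a first check would be whether the apiserver container is running inside the profile (a diagnostic sketch, assuming the docker runtime this run uses):

	minikube -p functional-20220602172845-12108 ssh -- docker ps --filter name=kube-apiserver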
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-06-02 17:29:39 UTC, end at Thu 2022-06-02 18:09:01 UTC. --
	Jun 02 17:35:00 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:00.426051    6098 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-connect-74cf8bc446-qjhfg through plugin: invalid network status for"
	Jun 02 17:35:00 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:00.604378    6098 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-309b9820-d9d6-4da7-8d9a-107aeedb3011\" (UniqueName: \"kubernetes.io/host-path/54ac2c9b-7834-43cc-9659-4796f4b3a5c4-pvc-309b9820-d9d6-4da7-8d9a-107aeedb3011\") pod \"sp-pod\" (UID: \"54ac2c9b-7834-43cc-9659-4796f4b3a5c4\") " pod="default/sp-pod"
	Jun 02 17:35:00 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:00.604609    6098 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jmlc\" (UniqueName: \"kubernetes.io/projected/54ac2c9b-7834-43cc-9659-4796f4b3a5c4-kube-api-access-7jmlc\") pod \"sp-pod\" (UID: \"54ac2c9b-7834-43cc-9659-4796f4b3a5c4\") " pod="default/sp-pod"
	Jun 02 17:35:01 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:01.304256    6098 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=13c07143-bcda-415b-987d-4813238cdbe3 path="/var/lib/kubelet/pods/13c07143-bcda-415b-987d-4813238cdbe3/volumes"
	Jun 02 17:35:01 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:01.730504    6098 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/sp-pod through plugin: invalid network status for"
	Jun 02 17:35:01 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:01.743742    6098 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-54fbb85-l5tpx through plugin: invalid network status for"
	Jun 02 17:35:01 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:01.806854    6098 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-connect-74cf8bc446-qjhfg through plugin: invalid network status for"
	Jun 02 17:35:01 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:01.821523    6098 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/sp-pod through plugin: invalid network status for"
	Jun 02 17:35:01 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:01.823851    6098 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="39c76780d10c2c949604b78c48825227602add04c93a94e540ddc889d5416150"
	Jun 02 17:35:02 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:02.840034    6098 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/sp-pod through plugin: invalid network status for"
	Jun 02 17:35:03 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:03.878648    6098 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/sp-pod through plugin: invalid network status for"
	Jun 02 17:35:43 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:43.331556    6098 topology_manager.go:200] "Topology Admit Handler"
	Jun 02 17:35:43 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:43.503528    6098 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvf5z\" (UniqueName: \"kubernetes.io/projected/f72f89e4-b36a-4fdd-8ebe-b615c45f18a4-kube-api-access-hvf5z\") pod \"mysql-b87c45988-mbb25\" (UID: \"f72f89e4-b36a-4fdd-8ebe-b615c45f18a4\") " pod="default/mysql-b87c45988-mbb25"
	Jun 02 17:35:44 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:44.554683    6098 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="c334f32103670ae236edfaa2a0bdf63555e49e3874fa38d16d17e4c30c462e64"
	Jun 02 17:35:44 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:44.555557    6098 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-b87c45988-mbb25 through plugin: invalid network status for"
	Jun 02 17:35:45 functional-20220602172845-12108 kubelet[6098]: I0602 17:35:45.573025    6098 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-b87c45988-mbb25 through plugin: invalid network status for"
	Jun 02 17:36:25 functional-20220602172845-12108 kubelet[6098]: I0602 17:36:25.975103    6098 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-b87c45988-mbb25 through plugin: invalid network status for"
	Jun 02 17:36:27 functional-20220602172845-12108 kubelet[6098]: I0602 17:36:27.355554    6098 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-b87c45988-mbb25 through plugin: invalid network status for"
	Jun 02 17:38:16 functional-20220602172845-12108 kubelet[6098]: W0602 17:38:16.028083    6098 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Jun 02 17:43:16 functional-20220602172845-12108 kubelet[6098]: W0602 17:43:16.023590    6098 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Jun 02 17:48:16 functional-20220602172845-12108 kubelet[6098]: W0602 17:48:16.025281    6098 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Jun 02 17:53:16 functional-20220602172845-12108 kubelet[6098]: W0602 17:53:16.026511    6098 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Jun 02 17:58:16 functional-20220602172845-12108 kubelet[6098]: W0602 17:58:16.027619    6098 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Jun 02 18:03:16 functional-20220602172845-12108 kubelet[6098]: W0602 18:03:16.029803    6098 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Jun 02 18:08:16 functional-20220602172845-12108 kubelet[6098]: W0602 18:08:16.030301    6098 sysinfo.go:203] Nodes topology is not available, providing CPU topology
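The repeating "Nodes topology is not available, providing CPU topology" warning comes from cAdvisor when no NUMA topology is exposed under sysfs, which is expected for the WSL2 kernel this run reports and is harmless. One way to confirm that assumption about the cause:

	minikube -p functional-20220602172845-12108 ssh -- ls /sys/devices/system/node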
	
	* 
	* ==> storage-provisioner [21743903ddc5] <==
	* I0602 17:33:06.121988       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0602 17:33:06.201342       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
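This provisioner instance died because the in-cluster service IP refused connections, i.e. the apiserver was down at 17:33:06 while the control plane restarted; kubelet then started the replacement container shown in the next section. A minimal reachability check from the host, assuming the same kubectl context:

	kubectl --context functional-20220602172845-12108 get --raw /readyz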
	
	* 
	* ==> storage-provisioner [b46fbda7e4d2] <==
	* I0602 17:33:20.522005       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0602 17:33:25.113668       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0602 17:33:25.113880       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0602 17:33:42.660204       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0602 17:33:42.660586       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20220602172845-12108_c1102dcb-2ca9-47cd-ae2b-4d0e28cc1795!
	I0602 17:33:42.660584       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"81205cc1-768b-42f4-93e6-bb23e91e5f2d", APIVersion:"v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20220602172845-12108_c1102dcb-2ca9-47cd-ae2b-4d0e28cc1795 became leader
	I0602 17:33:42.761527       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20220602172845-12108_c1102dcb-2ca9-47cd-ae2b-4d0e28cc1795!
	I0602 17:34:25.102478       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0602 17:34:25.102841       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    08a31009-78ce-400c-b4e4-386a272ea447 464 0 2022-06-02 17:30:49 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2022-06-02 17:30:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-309b9820-d9d6-4da7-8d9a-107aeedb3011 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  309b9820-d9d6-4da7-8d9a-107aeedb3011 706 0 2022-06-02 17:34:25 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kubectl.exe Update v1 2022-06-02 17:34:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}} {kube-controller-manager Update v1 2022-06-02 17:34:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0602 17:34:25.103415       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"309b9820-d9d6-4da7-8d9a-107aeedb3011", APIVersion:"v1", ResourceVersion:"706", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0602 17:34:25.104060       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-309b9820-d9d6-4da7-8d9a-107aeedb3011" provisioned
	I0602 17:34:25.104226       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0602 17:34:25.104239       1 volume_store.go:212] Trying to save persistentvolume "pvc-309b9820-d9d6-4da7-8d9a-107aeedb3011"
	I0602 17:34:25.122669       1 volume_store.go:219] persistentvolume "pvc-309b9820-d9d6-4da7-8d9a-107aeedb3011" saved
	I0602 17:34:25.123421       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"309b9820-d9d6-4da7-8d9a-107aeedb3011", APIVersion:"v1", ResourceVersion:"706", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-309b9820-d9d6-4da7-8d9a-107aeedb3011
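The provisioning sequence above can be cross-checked through the API; both objects should report a Bound phase once the claim is satisfied (a verification sketch using the names from this log):

	kubectl --context functional-20220602172845-12108 get pvc/myclaim persistentvolume/pvc-309b9820-d9d6-4da7-8d9a-107aeedb3011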
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20220602172845-12108 -n functional-20220602172845-12108
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20220602172845-12108 -n functional-20220602172845-12108: (6.3352679s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-20220602172845-12108 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context functional-20220602172845-12108 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context functional-20220602172845-12108 describe pod : exit status 1 (198.5873ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context functional-20220602172845-12108 describe pod : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd (2073.94s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (181.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220602172845-12108 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220602172845-12108 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:169: (dbg) Done: kubectl --context functional-20220602172845-12108 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}: (2.1952487s)

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220602172845-12108 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220602172845-12108 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220602172845-12108 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220602172845-12108 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:180: nginx-svc svc.status.loadBalancer.ingress never got an IP: timed out waiting for the condition
functional_test_tunnel_test.go:181: (dbg) Run:  kubectl --context functional-20220602172845-12108 get svc nginx-svc
functional_test_tunnel_test.go:185: failed to kubectl get svc nginx-svc:

                                                
                                                
-- stdout --
	NAME        TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
	nginx-svc   LoadBalancer   10.99.63.21   <pending>     80:31089/TCP   3m17s

                                                
                                                
-- /stdout --
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (181.17s)
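On the docker driver a LoadBalancer service only receives an ingress IP while minikube tunnel is running, which is why EXTERNAL-IP stayed <pending> above. A manual reproduction outside the harness keeps the tunnel alive in its own (elevated) terminal and polls from a second one (a sketch, not the harness's exact invocation):

	minikube -p functional-20220602172845-12108 tunnel
	kubectl --context functional-20220602172845-12108 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}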

                                                
                                    
TestKubernetesUpgrade (254.19s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220602191340-12108 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker
E0602 19:14:06.676197   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220602191340-12108 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker: (2m38.0218794s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220602191340-12108

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220602191340-12108: (10.8990914s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-20220602191340-12108 status --format={{.Host}}

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-20220602191340-12108 status --format={{.Host}}: exit status 7 (3.0704272s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220602191340-12108 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220602191340-12108 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker: exit status 80 (53.4888967s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20220602191340-12108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14269
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting control plane node kubernetes-upgrade-20220602191340-12108 in cluster kubernetes-upgrade-20220602191340-12108
	* Pulling base image ...
	* Restarting existing docker container for "kubernetes-upgrade-20220602191340-12108" ...
	* Restarting existing docker container for "kubernetes-upgrade-20220602191340-12108" ...
	
	

                                                
                                                
-- /stdout --
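The doubled "Restarting existing docker container" line above reflects a failed docker start followed by one retry. The postmortem inspect in the stderr below records the underlying cause: the container exited with code 130 and Error "network 322f4233d6e4... not found", i.e. the profile's Docker network was removed while the container was stopped, so it could not be reattached on restart. A recovery sketch (an assumed fix, not something this run attempted):

	docker network ls --filter name=kubernetes-upgrade-20220602191340-12108
	minikube delete -p kubernetes-upgrade-20220602191340-12108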
** stderr ** 
	I0602 19:16:32.783544     936 out.go:296] Setting OutFile to fd 2040 ...
	I0602 19:16:32.837548     936 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 19:16:32.837548     936 out.go:309] Setting ErrFile to fd 2044...
	I0602 19:16:32.837548     936 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 19:16:32.855272     936 out.go:303] Setting JSON to false
	I0602 19:16:32.857776     936 start.go:115] hostinfo: {"hostname":"minikube7","uptime":60534,"bootTime":1654136858,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0602 19:16:32.858295     936 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 19:16:32.861708     936 out.go:177] * [kubernetes-upgrade-20220602191340-12108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0602 19:16:32.865159     936 notify.go:193] Checking for updates...
	I0602 19:16:32.867427     936 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0602 19:16:32.870241     936 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0602 19:16:32.872706     936 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 19:16:32.874913     936 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 19:16:32.877809     936 config.go:178] Loaded profile config "kubernetes-upgrade-20220602191340-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0602 19:16:32.878541     936 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 19:16:35.593292     936 docker.go:137] docker version: linux-20.10.16
	I0602 19:16:35.602329     936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:16:37.742626     936 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1402872s)
	I0602 19:16:37.742626     936 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:49 OomKillDisable:true NGoroutines:50 SystemTime:2022-06-02 19:16:36.6671596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:16:37.756615     936 out.go:177] * Using the docker driver based on existing profile
	I0602 19:16:37.759623     936 start.go:284] selected driver: docker
	I0602 19:16:37.759623     936 start.go:806] validating driver "docker" against &{Name:kubernetes-upgrade-20220602191340-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220602191340-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 19:16:37.759623     936 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 19:16:37.823622     936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:16:42.186027     936 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (4.362247s)
	I0602 19:16:42.186611     936 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:52 OomKillDisable:true NGoroutines:54 SystemTime:2022-06-02 19:16:38.9116107 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:16:42.187346     936 cni.go:95] Creating CNI manager for ""
	I0602 19:16:42.187389     936 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 19:16:42.187389     936 start_flags.go:306] config:
	{Name:kubernetes-upgrade-20220602191340-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kubernetes-upgrade-20220602191340-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 19:16:42.192085     936 out.go:177] * Starting control plane node kubernetes-upgrade-20220602191340-12108 in cluster kubernetes-upgrade-20220602191340-12108
	I0602 19:16:42.194415     936 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 19:16:42.196352     936 out.go:177] * Pulling base image ...
	I0602 19:16:42.199823     936 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 19:16:42.199823     936 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 19:16:42.200440     936 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 19:16:42.200440     936 cache.go:57] Caching tarball of preloaded images
	I0602 19:16:42.200664     936 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 19:16:42.201171     936 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0602 19:16:42.201397     936 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-20220602191340-12108\config.json ...
	I0602 19:16:43.440604     936 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 19:16:43.440604     936 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 19:16:43.440604     936 cache.go:206] Successfully downloaded all kic artifacts
	I0602 19:16:43.440604     936 start.go:352] acquiring machines lock for kubernetes-upgrade-20220602191340-12108: {Name:mkbaca63acce43a03a0803ba4a0d56470a4248b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 19:16:43.440604     936 start.go:356] acquired machines lock for "kubernetes-upgrade-20220602191340-12108" in 0s
	I0602 19:16:43.440604     936 start.go:94] Skipping create...Using existing machine configuration
	I0602 19:16:43.440604     936 fix.go:55] fixHost starting: 
	I0602 19:16:43.455602     936 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220602191340-12108 --format={{.State.Status}}
	I0602 19:16:44.703394     936 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220602191340-12108 --format={{.State.Status}}: (1.247787s)
	I0602 19:16:44.703394     936 fix.go:103] recreateIfNeeded on kubernetes-upgrade-20220602191340-12108: state=Stopped err=<nil>
	W0602 19:16:44.703394     936 fix.go:129] unexpected machine state, will restart: <nil>
	I0602 19:16:44.707396     936 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-20220602191340-12108" ...
	I0602 19:16:44.717400     936 cli_runner.go:164] Run: docker start kubernetes-upgrade-20220602191340-12108
	W0602 19:16:45.963054     936 cli_runner.go:211] docker start kubernetes-upgrade-20220602191340-12108 returned with exit code 1
	I0602 19:16:45.963104     936 cli_runner.go:217] Completed: docker start kubernetes-upgrade-20220602191340-12108: (1.2456483s)
	I0602 19:16:45.974758     936 cli_runner.go:164] Run: docker inspect kubernetes-upgrade-20220602191340-12108
	I0602 19:16:47.134081     936 cli_runner.go:217] Completed: docker inspect kubernetes-upgrade-20220602191340-12108: (1.1593172s)
	I0602 19:16:47.134081     936 errors.go:84] Postmortem inspect ("docker inspect kubernetes-upgrade-20220602191340-12108"): -- stdout --
	[
	    {
	        "Id": "e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024",
	        "Created": "2022-06-02T19:14:47.8422731Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "network 322f4233d6e4b7b1eb6c0630668710dda5d7f72c8791e2d7e090bf7c854cb1c3 not found",
	            "StartedAt": "2022-06-02T19:14:50.2110413Z",
	            "FinishedAt": "2022-06-02T19:16:27.1435289Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024/hostname",
	        "HostsPath": "/var/lib/docker/containers/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024/hosts",
	        "LogPath": "/var/lib/docker/containers/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024-json.log",
	        "Name": "/kubernetes-upgrade-20220602191340-12108",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-20220602191340-12108:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20220602191340-12108",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dd7f480705ee377d00cab0a1c3708bcaa5fd0a0169bf492baa593faa194ce02c-init/diff:/var/lib/docker/overlay2/dfce970b43800856c522d9750e5e1364e8adf4be4cf71ca7c53d79b33355f5a7/diff:/var/lib/docker/overlay2/4fd23a1b84854239f1bb855d05e42ecd6acbd1b0944b347813a56f5f45356a42/diff:/var/lib/docker/overlay2/864c5b1fbc297750771bb843fdeb4bafa10868a71716f4a01f1119609fb34667/diff:/var/lib/docker/overlay2/0f11f6855118857c743b90ca120ff7aa550f8157d475abf59df950433a5bc6e8/diff:/var/lib/docker/overlay2/2ae7f559725a060dc3b3a9c2fbd554b98114ae47dbf8db75f13bd8a95cbae19a/diff:/var/lib/docker/overlay2/48f41ac288d1037223ac101e6bc07f05729cdcecd98cc85971db99e90765c437/diff:/var/lib/docker/overlay2/8d4eaae639ade3ad3459b4fb67dbcac83774b72a2550b0a4bca1f21d122b20e6/diff:/var/lib/docker/overlay2/e06515bb91756221300de52336376d32ef9bd8685a92352e522936c4947b88ee/diff:/var/lib/docker/overlay2/a2f615fb794b704dc3823080c47e2c357cf4826ec91f6ae190c7497bb18a80cd/diff:/var/lib/docker/overlay2/22f99f
8a3da21c6e2be4c5c5e9d969af73e7695aaf9b0c7d0d09b5795ba76416/diff:/var/lib/docker/overlay2/9c0266785c64b9f6c471863067ca9db045a5aa61167a7817217cf01825a7d868/diff:/var/lib/docker/overlay2/b8a0250c9ae7d899ee3e46414c2db7f7ba363793900f8fcbf1b470586ebe7bd9/diff:/var/lib/docker/overlay2/00afbeac619cb9c06d4da311f5fc5aa3f5147b88b291acf06d4c4b36984ad5a2/diff:/var/lib/docker/overlay2/da51241ed08bd861b9d27902198eae13c3e4aac5c79f522e9f3fa209ea35e8d3/diff:/var/lib/docker/overlay2/b01176f7dbe98e3004db7c0fe45d94616a803dd8ae9cbdf3a1f2a188604178af/diff:/var/lib/docker/overlay2/0ebb0ff0177c8116e72a14ac704b161f75922cea05fe804ad1f7b83f4cd3dd70/diff:/var/lib/docker/overlay2/bae8d175bc3e334a70aaa239643efa0e8b453ab163f077d9cef60e3840c717ba/diff:/var/lib/docker/overlay2/e72a79f763a44dc32f9a2e84dc5e28a060e7fbb9f4624cb8aaa084dd356522ec/diff:/var/lib/docker/overlay2/2e1bc304b205033ad7f49fb8db243b0991596e0eec913fd13e8382aa25767e21/diff:/var/lib/docker/overlay2/ebb9b39dedfc09f9f34ea879f56a8ffd24ab9f9bf8acc93aa9df5eb93dba58e8/diff:/var/lib/d
ocker/overlay2/bffdca36eba4bce9086f2c269bcfe5b915d807483717f0e27acbd51b5bbfc11b/diff:/var/lib/docker/overlay2/96c321cbf06c0050c8a0a7897e9533db1ee5788eb09b1e1d605bdd1134af8eca/diff:/var/lib/docker/overlay2/735422b44af98e330209fe1c4273bf57aa33fcfd770f3e9d6f1a6e59f7545920/diff:/var/lib/docker/overlay2/8dc177c0589f67ded7d9c229d3c587fe77b3d1c68cf0a5af871bc23768d67d84/diff:/var/lib/docker/overlay2/9a29541ccfee3849e0691950c599bb7e4e51d9026724b1ad13abc8d8e9c140e0/diff:/var/lib/docker/overlay2/50fe1dc8f357b5d624681e6f14d98e6d33a8b6b53d70293ba90ac4435a1e18d8/diff:/var/lib/docker/overlay2/86f301a296dbb7422a3d55a008a9f38278a7a19d68a0f735d298c0c2a431ee30/diff:/var/lib/docker/overlay2/dc8087ea592587f8cb5392cc0ee739c33f2724c47b83767d593b3065914820b0/diff:/var/lib/docker/overlay2/15163601889f0d414f35ccd64ae33a52958605b5b7e50618ed5d4f4bd06ec65b/diff:/var/lib/docker/overlay2/a50cf19d9d69b9c68c6c66a918cbde678b49e8d566d06772af22bf99191b08f3/diff:/var/lib/docker/overlay2/621f3b0fc578721c5d0465771ad007f022ed238fa5a2076f807c077680c
26d27/diff:/var/lib/docker/overlay2/2652f9ffde92786a77e3bb35fe07c03a623aaad541f0ca9710839800c4b470e4/diff:/var/lib/docker/overlay2/c853755ee76ea55ad6c00f5eaff82196f4953ee6fb2d27e27ba35f86d56bfc32/diff:/var/lib/docker/overlay2/a0f70e6416a8e618ea7475b5e7f4cdc9a66ac39f0a6c1969c569d8e4f0b5e9eb/diff:/var/lib/docker/overlay2/275d2c643ecb011298df16e0794bebb9a7ec82e190aea53a90369288c521f75e/diff:/var/lib/docker/overlay2/a7e78f238badc23c2c38b7e9b9c4428c0614e825744076161295740d46a20957/diff:/var/lib/docker/overlay2/39fcd4c392271449973511a31d445289c1f8d378d01759fef12c430c9f44f2b8/diff:/var/lib/docker/overlay2/e1c51360d327e86575fe8248415fae12e9dbdde580db0e6f4f4e485ac9f92e3b/diff:/var/lib/docker/overlay2/fecd88783858177cbe3b751f0717b370c5556d7cf0ef163e2710f16fce09d53c/diff:/var/lib/docker/overlay2/3b4c7afaac6f5818bc33bec8c0ec442eb5a1010d0de6fe488460ee83a3901b21/diff:/var/lib/docker/overlay2/47d0047bc42c34ea02c33c1500f96c5109f27f84f973a5636832bbc855761e3f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dd7f480705ee377d00cab0a1c3708bcaa5fd0a0169bf492baa593faa194ce02c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dd7f480705ee377d00cab0a1c3708bcaa5fd0a0169bf492baa593faa194ce02c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dd7f480705ee377d00cab0a1c3708bcaa5fd0a0169bf492baa593faa194ce02c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20220602191340-12108",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20220602191340-12108/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20220602191340-12108",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20220602191340-12108",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20220602191340-12108",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7ed50d5ab7698ef031105a3fedaf6e5918caf31e28958792e67e2c629831ecb0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/7ed50d5ab769",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20220602191340-12108": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e425b5d64014",
	                        "kubernetes-upgrade-20220602191340-12108"
	                    ],
	                    "NetworkID": "322f4233d6e4b7b1eb6c0630668710dda5d7f72c8791e2d7e090bf7c854cb1c3",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
	
	-- /stdout --
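
Note on the inspect output above: it already pins down why "docker start" exited with code 1. The container is "exited" with ExitCode 130, and State.Error reports "network 322f4233d6e4b7b1eb6c0630668710dda5d7f72c8791e2d7e090bf7c854cb1c3 not found", the same NetworkID recorded under NetworkSettings.Networks, so the daemon cannot re-attach the container to its now-deleted user-defined network. A minimal way to confirm this failure mode by hand, using only standard docker CLI flags (a reproduction sketch, not part of the test harness):

	# Print only the fields relevant to this failure mode.
	docker inspect -f '{{.State.Status}} {{.State.ExitCode}} {{.State.Error}}' \
	    kubernetes-upgrade-20220602191340-12108
	# expected here: exited 130 network 322f4233... not found

	# NetworkMode names a user-defined network; if this command fails,
	# the network was removed and "docker start" cannot re-attach it.
	docker network inspect kubernetes-upgrade-20220602191340-12108
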
	I0602 19:16:47.144080     936 cli_runner.go:164] Run: docker logs --timestamps --details kubernetes-upgrade-20220602191340-12108
	I0602 19:16:48.362248     936 cli_runner.go:217] Completed: docker logs --timestamps --details kubernetes-upgrade-20220602191340-12108: (1.2181633s)
	I0602 19:16:48.362248     936 errors.go:91] Postmortem logs ("docker logs --timestamps --details kubernetes-upgrade-20220602191340-12108"): -- stdout --
	2022-06-02T19:14:50.208962100Z  + userns=
	2022-06-02T19:14:50.209003100Z  + grep -Eqv '0[[:space:]]+0[[:space:]]+4294967295' /proc/self/uid_map
	2022-06-02T19:14:50.215897100Z  + validate_userns
	2022-06-02T19:14:50.215923800Z  + [[ -z '' ]]
	2022-06-02T19:14:50.215933200Z  + return
	2022-06-02T19:14:50.215940400Z  + configure_containerd
	2022-06-02T19:14:50.215947700Z  + local snapshotter=
	2022-06-02T19:14:50.215954700Z  + [[ -n '' ]]
	2022-06-02T19:14:50.215961500Z  + [[ -z '' ]]
	2022-06-02T19:14:50.216747300Z  ++ stat -f -c %T /kind
	2022-06-02T19:14:50.218271300Z  + '[[overlayfs' == zfs ']]'
	2022-06-02T19:14:50.219106500Z  /usr/local/bin/entrypoint: line 112: [[overlayfs: command not found
	2022-06-02T19:14:50.220142800Z  + [[ -n '' ]]
	2022-06-02T19:14:50.220160700Z  + configure_proxy
	2022-06-02T19:14:50.220168300Z  + mkdir -p /etc/systemd/system.conf.d/
	2022-06-02T19:14:50.222374800Z  + [[ ! -z '' ]]
	2022-06-02T19:14:50.222393200Z  + cat
	2022-06-02T19:14:50.224164700Z  + fix_kmsg
	2022-06-02T19:14:50.224182900Z  + [[ ! -e /dev/kmsg ]]
	2022-06-02T19:14:50.224188800Z  + fix_mount
	2022-06-02T19:14:50.224194900Z  + echo 'INFO: ensuring we can execute mount/umount even with userns-remap'
	2022-06-02T19:14:50.224199600Z  INFO: ensuring we can execute mount/umount even with userns-remap
	2022-06-02T19:14:50.225458300Z  ++ which mount
	2022-06-02T19:14:50.227484800Z  ++ which umount
	2022-06-02T19:14:50.229527300Z  + chown root:root /usr/bin/mount /usr/bin/umount
	2022-06-02T19:14:50.292349200Z  ++ which mount
	2022-06-02T19:14:50.295658700Z  ++ which umount
	2022-06-02T19:14:50.299295600Z  + chmod -s /usr/bin/mount /usr/bin/umount
	2022-06-02T19:14:50.303523700Z  +++ which mount
	2022-06-02T19:14:50.306732900Z  ++ stat -f -c %T /usr/bin/mount
	2022-06-02T19:14:50.309597500Z  + [[ overlayfs == \a\u\f\s ]]
	2022-06-02T19:14:50.309626100Z  + echo 'INFO: remounting /sys read-only'
	2022-06-02T19:14:50.309636500Z  INFO: remounting /sys read-only
	2022-06-02T19:14:50.309643700Z  + mount -o remount,ro /sys
	2022-06-02T19:14:50.314827100Z  + echo 'INFO: making mounts shared'
	2022-06-02T19:14:50.314857700Z  INFO: making mounts shared
	2022-06-02T19:14:50.314869700Z  + mount --make-rshared /
	2022-06-02T19:14:50.317807700Z  + retryable_fix_cgroup
	2022-06-02T19:14:50.318621300Z  ++ seq 0 10
	2022-06-02T19:14:50.320614200Z  + for i in $(seq 0 10)
	2022-06-02T19:14:50.320633700Z  + fix_cgroup
	2022-06-02T19:14:50.320642500Z  + [[ -f /sys/fs/cgroup/cgroup.controllers ]]
	2022-06-02T19:14:50.320650300Z  + echo 'INFO: detected cgroup v1'
	2022-06-02T19:14:50.320692200Z  INFO: detected cgroup v1
	2022-06-02T19:14:50.320701400Z  + local current_cgroup
	2022-06-02T19:14:50.323730400Z  ++ grep -E '^[^:]*:([^:]*,)?cpu(,[^,:]*)?:.*' /proc/self/cgroup
	2022-06-02T19:14:50.323756800Z  ++ cut -d: -f3
	2022-06-02T19:14:50.326667300Z  + current_cgroup=/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.326715300Z  + '[' /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 = / ']'
	2022-06-02T19:14:50.326727600Z  + echo 'WARN: cgroupns not enabled! Please use cgroup v2, or cgroup v1 with cgroupns enabled.'
	2022-06-02T19:14:50.326735000Z  WARN: cgroupns not enabled! Please use cgroup v2, or cgroup v1 with cgroupns enabled.
	2022-06-02T19:14:50.326742000Z  + echo 'INFO: fix cgroup mounts for all subsystems'
	2022-06-02T19:14:50.326748700Z  INFO: fix cgroup mounts for all subsystems
	2022-06-02T19:14:50.326766200Z  + local cgroup_subsystems
	2022-06-02T19:14:50.328389500Z  ++ findmnt -lun -o source,target -t cgroup
	2022-06-02T19:14:50.328431100Z  ++ grep /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.329738500Z  ++ awk '{print $2}'
	2022-06-02T19:14:50.334821400Z  + cgroup_subsystems='/sys/fs/cgroup/cpuset
	2022-06-02T19:14:50.334848200Z  /sys/fs/cgroup/cpu
	2022-06-02T19:14:50.334857700Z  /sys/fs/cgroup/cpuacct
	2022-06-02T19:14:50.334877000Z  /sys/fs/cgroup/blkio
	2022-06-02T19:14:50.334890000Z  /sys/fs/cgroup/memory
	2022-06-02T19:14:50.334900800Z  /sys/fs/cgroup/devices
	2022-06-02T19:14:50.334907700Z  /sys/fs/cgroup/freezer
	2022-06-02T19:14:50.334921900Z  /sys/fs/cgroup/net_cls
	2022-06-02T19:14:50.334934100Z  /sys/fs/cgroup/perf_event
	2022-06-02T19:14:50.335048300Z  /sys/fs/cgroup/net_prio
	2022-06-02T19:14:50.335058000Z  /sys/fs/cgroup/hugetlb
	2022-06-02T19:14:50.335080700Z  /sys/fs/cgroup/pids
	2022-06-02T19:14:50.335091100Z  /sys/fs/cgroup/rdma
	2022-06-02T19:14:50.335100900Z  /sys/fs/cgroup/systemd'
	2022-06-02T19:14:50.335107500Z  + local unsupported_cgroups
	2022-06-02T19:14:50.337396700Z  ++ findmnt -lun -o source,target -t cgroup
	2022-06-02T19:14:50.337421200Z  ++ grep_allow_nomatch -v /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.337431700Z  ++ grep -v /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.338394800Z  ++ awk '{print $2}'
	2022-06-02T19:14:50.341699000Z  ++ [[ 1 == 1 ]]
	2022-06-02T19:14:50.343746000Z  + unsupported_cgroups=
	2022-06-02T19:14:50.343768900Z  + '[' -n '' ']'
	2022-06-02T19:14:50.343776200Z  + local cgroup_mounts
	2022-06-02T19:14:50.345168800Z  ++ grep -E -o '/[[:alnum:]].* /sys/fs/cgroup.*.*cgroup' /proc/self/mountinfo
	2022-06-02T19:14:50.350792000Z  + cgroup_mounts='/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:450 master:58 - cgroup
	2022-06-02T19:14:50.350816100Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime shared:451 master:59 - cgroup
	2022-06-02T19:14:50.350830100Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime shared:452 master:60 - cgroup
	2022-06-02T19:14:50.350838400Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:453 master:61 - cgroup
	2022-06-02T19:14:50.350845800Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:454 master:62 - cgroup
	2022-06-02T19:14:50.350879000Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:455 master:63 - cgroup
	2022-06-02T19:14:50.350886800Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:456 master:64 - cgroup
	2022-06-02T19:14:50.350893700Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime shared:457 master:65 - cgroup
	2022-06-02T19:14:50.350902900Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:458 master:66 - cgroup
	2022-06-02T19:14:50.350910200Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/net_prio rw,nosuid,nodev,noexec,relatime shared:459 master:67 - cgroup
	2022-06-02T19:14:50.350923800Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:460 master:68 - cgroup
	2022-06-02T19:14:50.351025300Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:461 master:69 - cgroup
	2022-06-02T19:14:50.351041500Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime shared:462 master:70 - cgroup
	2022-06-02T19:14:50.351059400Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:463 master:71 - cgroup cgroup'
	2022-06-02T19:14:50.351071300Z  + [[ -n /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:450 master:58 - cgroup
	2022-06-02T19:14:50.351081100Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime shared:451 master:59 - cgroup
	2022-06-02T19:14:50.351088500Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime shared:452 master:60 - cgroup
	2022-06-02T19:14:50.351095500Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:453 master:61 - cgroup
	2022-06-02T19:14:50.351123500Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:454 master:62 - cgroup
	2022-06-02T19:14:50.351132400Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:455 master:63 - cgroup
	2022-06-02T19:14:50.351139600Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:456 master:64 - cgroup
	2022-06-02T19:14:50.351148800Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime shared:457 master:65 - cgroup
	2022-06-02T19:14:50.351155900Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:458 master:66 - cgroup
	2022-06-02T19:14:50.351162900Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/net_prio rw,nosuid,nodev,noexec,relatime shared:459 master:67 - cgroup
	2022-06-02T19:14:50.351180500Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:460 master:68 - cgroup
	2022-06-02T19:14:50.352579300Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:461 master:69 - cgroup
	2022-06-02T19:14:50.352603200Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime shared:462 master:70 - cgroup
	2022-06-02T19:14:50.352632900Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:463 master:71 - cgroup cgroup ]]
	2022-06-02T19:14:50.352646500Z  + local mount_root
	2022-06-02T19:14:50.352652900Z  ++ head -n 1
	2022-06-02T19:14:50.352660900Z  ++ cut '-d ' -f1
	2022-06-02T19:14:50.354479100Z  + mount_root=/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.356187600Z  ++ echo '/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:450 master:58 - cgroup
	2022-06-02T19:14:50.356216200Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime shared:451 master:59 - cgroup
	2022-06-02T19:14:50.356227500Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime shared:452 master:60 - cgroup
	2022-06-02T19:14:50.356237000Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:453 master:61 - cgroup
	2022-06-02T19:14:50.356250400Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:454 master:62 - cgroup
	2022-06-02T19:14:50.356260400Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:455 master:63 - cgroup
	2022-06-02T19:14:50.356292300Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:456 master:64 - cgroup
	2022-06-02T19:14:50.356308000Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime shared:457 master:65 - cgroup
	2022-06-02T19:14:50.356317700Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:458 master:66 - cgroup
	2022-06-02T19:14:50.356327200Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/net_prio rw,nosuid,nodev,noexec,relatime shared:459 master:67 - cgroup
	2022-06-02T19:14:50.356336500Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:460 master:68 - cgroup
	2022-06-02T19:14:50.356349500Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:461 master:69 - cgroup
	2022-06-02T19:14:50.356361900Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime shared:462 master:70 - cgroup
	2022-06-02T19:14:50.356380200Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:463 master:71 - cgroup cgroup'
	2022-06-02T19:14:50.356392700Z  ++ cut '-d ' -f 2
	2022-06-02T19:14:50.359198700Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.359220900Z  + local target=/sys/fs/cgroup/cpuset/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.359230600Z  + findmnt /sys/fs/cgroup/cpuset/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.363517300Z  + mkdir -p /sys/fs/cgroup/cpuset/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.427299000Z  + mount --bind /sys/fs/cgroup/cpuset /sys/fs/cgroup/cpuset/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.430718400Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.430741100Z  + local target=/sys/fs/cgroup/cpu/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.430751700Z  + findmnt /sys/fs/cgroup/cpu/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.434475400Z  + mkdir -p /sys/fs/cgroup/cpu/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.438694200Z  + mount --bind /sys/fs/cgroup/cpu /sys/fs/cgroup/cpu/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.440930400Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.440952200Z  + local target=/sys/fs/cgroup/cpuacct/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.440960800Z  + findmnt /sys/fs/cgroup/cpuacct/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.445325100Z  + mkdir -p /sys/fs/cgroup/cpuacct/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.448231000Z  + mount --bind /sys/fs/cgroup/cpuacct /sys/fs/cgroup/cpuacct/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.451531300Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.451553000Z  + local target=/sys/fs/cgroup/blkio/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.451562000Z  + findmnt /sys/fs/cgroup/blkio/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.455984700Z  + mkdir -p /sys/fs/cgroup/blkio/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.458189000Z  + mount --bind /sys/fs/cgroup/blkio /sys/fs/cgroup/blkio/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.461672000Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.461694700Z  + local target=/sys/fs/cgroup/memory/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.461704300Z  + findmnt /sys/fs/cgroup/memory/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.465852800Z  + mkdir -p /sys/fs/cgroup/memory/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.468788200Z  + mount --bind /sys/fs/cgroup/memory /sys/fs/cgroup/memory/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.471344400Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.471366900Z  + local target=/sys/fs/cgroup/devices/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.471376800Z  + findmnt /sys/fs/cgroup/devices/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.475133400Z  + mkdir -p /sys/fs/cgroup/devices/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.477082700Z  + mount --bind /sys/fs/cgroup/devices /sys/fs/cgroup/devices/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.479168000Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.479186400Z  + local target=/sys/fs/cgroup/freezer/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.479192300Z  + findmnt /sys/fs/cgroup/freezer/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.482450300Z  + mkdir -p /sys/fs/cgroup/freezer/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.484448200Z  + mount --bind /sys/fs/cgroup/freezer /sys/fs/cgroup/freezer/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.488261400Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.488298800Z  + local target=/sys/fs/cgroup/net_cls/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.488307200Z  + findmnt /sys/fs/cgroup/net_cls/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.491445200Z  + mkdir -p /sys/fs/cgroup/net_cls/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.493419100Z  + mount --bind /sys/fs/cgroup/net_cls /sys/fs/cgroup/net_cls/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.496059900Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.496083900Z  + local target=/sys/fs/cgroup/perf_event/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.496092600Z  + findmnt /sys/fs/cgroup/perf_event/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.500089400Z  + mkdir -p /sys/fs/cgroup/perf_event/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.502349600Z  + mount --bind /sys/fs/cgroup/perf_event /sys/fs/cgroup/perf_event/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.504773100Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.504797100Z  + local target=/sys/fs/cgroup/net_prio/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.504807600Z  + findmnt /sys/fs/cgroup/net_prio/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.507397100Z  + mkdir -p /sys/fs/cgroup/net_prio/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.510043900Z  + mount --bind /sys/fs/cgroup/net_prio /sys/fs/cgroup/net_prio/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.512661600Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.512699100Z  + local target=/sys/fs/cgroup/hugetlb/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.512717400Z  + findmnt /sys/fs/cgroup/hugetlb/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.517050500Z  + mkdir -p /sys/fs/cgroup/hugetlb/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.519238100Z  + mount --bind /sys/fs/cgroup/hugetlb /sys/fs/cgroup/hugetlb/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.522241900Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.522255000Z  + local target=/sys/fs/cgroup/pids/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.522279000Z  + findmnt /sys/fs/cgroup/pids/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.525687400Z  + mkdir -p /sys/fs/cgroup/pids/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.528323500Z  + mount --bind /sys/fs/cgroup/pids /sys/fs/cgroup/pids/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.531332800Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.531352600Z  + local target=/sys/fs/cgroup/rdma/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.531362300Z  + findmnt /sys/fs/cgroup/rdma/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.535075900Z  + mkdir -p /sys/fs/cgroup/rdma/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.537765700Z  + mount --bind /sys/fs/cgroup/rdma /sys/fs/cgroup/rdma/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.541808500Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.541818500Z  + local target=/sys/fs/cgroup/systemd/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.541826100Z  + findmnt /sys/fs/cgroup/systemd/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.547185800Z  + mkdir -p /sys/fs/cgroup/systemd/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.549581900Z  + mount --bind /sys/fs/cgroup/systemd /sys/fs/cgroup/systemd/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.552456200Z  + mount --make-rprivate /sys/fs/cgroup
	2022-06-02T19:14:50.557241700Z  + echo '/sys/fs/cgroup/cpuset
	2022-06-02T19:14:50.557269500Z  /sys/fs/cgroup/cpu
	2022-06-02T19:14:50.557281000Z  /sys/fs/cgroup/cpuacct
	2022-06-02T19:14:50.557288900Z  /sys/fs/cgroup/blkio
	2022-06-02T19:14:50.557295900Z  /sys/fs/cgroup/memory
	2022-06-02T19:14:50.557304800Z  /sys/fs/cgroup/devices
	2022-06-02T19:14:50.557312300Z  /sys/fs/cgroup/freezer
	2022-06-02T19:14:50.557319300Z  /sys/fs/cgroup/net_cls
	2022-06-02T19:14:50.557326200Z  /sys/fs/cgroup/perf_event
	2022-06-02T19:14:50.557333300Z  /sys/fs/cgroup/net_prio
	2022-06-02T19:14:50.557340300Z  /sys/fs/cgroup/hugetlb
	2022-06-02T19:14:50.557348200Z  /sys/fs/cgroup/pids
	2022-06-02T19:14:50.557356800Z  /sys/fs/cgroup/rdma
	2022-06-02T19:14:50.557365200Z  /sys/fs/cgroup/systemd'
	2022-06-02T19:14:50.557372400Z  + IFS=
	2022-06-02T19:14:50.557379500Z  + read -r subsystem
	2022-06-02T19:14:50.557386900Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpuset
	2022-06-02T19:14:50.557397200Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.557408600Z  + local subsystem=/sys/fs/cgroup/cpuset
	2022-06-02T19:14:50.557416200Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.557423200Z  + mkdir -p /sys/fs/cgroup/cpuset//kubelet
	2022-06-02T19:14:50.626737300Z  + '[' /sys/fs/cgroup/cpuset == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.626805000Z  + cat /sys/fs/cgroup/cpuset/cpuset.cpus
	2022-06-02T19:14:50.630539000Z  + cat /sys/fs/cgroup/cpuset/cpuset.mems
	2022-06-02T19:14:50.635384100Z  + mount --bind /sys/fs/cgroup/cpuset//kubelet /sys/fs/cgroup/cpuset//kubelet
	2022-06-02T19:14:50.639406700Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/cpuset
	2022-06-02T19:14:50.639428300Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.639437800Z  + local subsystem=/sys/fs/cgroup/cpuset
	2022-06-02T19:14:50.639444700Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.639451300Z  + mkdir -p /sys/fs/cgroup/cpuset//kubelet.slice
	2022-06-02T19:14:50.642141300Z  + '[' /sys/fs/cgroup/cpuset == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.642163900Z  + cat /sys/fs/cgroup/cpuset/cpuset.cpus
	2022-06-02T19:14:50.647798800Z  + cat /sys/fs/cgroup/cpuset/cpuset.mems
	2022-06-02T19:14:50.649799900Z  + mount --bind /sys/fs/cgroup/cpuset//kubelet.slice /sys/fs/cgroup/cpuset//kubelet.slice
	2022-06-02T19:14:50.653207700Z  + IFS=
	2022-06-02T19:14:50.653228400Z  + read -r subsystem
	2022-06-02T19:14:50.653238000Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpu
	2022-06-02T19:14:50.653251400Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.653262800Z  + local subsystem=/sys/fs/cgroup/cpu
	2022-06-02T19:14:50.653271200Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.653278000Z  + mkdir -p /sys/fs/cgroup/cpu//kubelet
	2022-06-02T19:14:50.656424100Z  + '[' /sys/fs/cgroup/cpu == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.656445800Z  + mount --bind /sys/fs/cgroup/cpu//kubelet /sys/fs/cgroup/cpu//kubelet
	2022-06-02T19:14:50.659861600Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/cpu
	2022-06-02T19:14:50.659877200Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.659889200Z  + local subsystem=/sys/fs/cgroup/cpu
	2022-06-02T19:14:50.659896600Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.659903700Z  + mkdir -p /sys/fs/cgroup/cpu//kubelet.slice
	2022-06-02T19:14:50.662154200Z  + '[' /sys/fs/cgroup/cpu == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.662173800Z  + mount --bind /sys/fs/cgroup/cpu//kubelet.slice /sys/fs/cgroup/cpu//kubelet.slice
	2022-06-02T19:14:50.665303600Z  + IFS=
	2022-06-02T19:14:50.665327500Z  + read -r subsystem
	2022-06-02T19:14:50.665337200Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpuacct
	2022-06-02T19:14:50.665345400Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.665352600Z  + local subsystem=/sys/fs/cgroup/cpuacct
	2022-06-02T19:14:50.665360100Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.665367000Z  + mkdir -p /sys/fs/cgroup/cpuacct//kubelet
	2022-06-02T19:14:50.667896500Z  + '[' /sys/fs/cgroup/cpuacct == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.667934100Z  + mount --bind /sys/fs/cgroup/cpuacct//kubelet /sys/fs/cgroup/cpuacct//kubelet
	2022-06-02T19:14:50.671382200Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/cpuacct
	2022-06-02T19:14:50.671403800Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.671413000Z  + local subsystem=/sys/fs/cgroup/cpuacct
	2022-06-02T19:14:50.671420200Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.671429200Z  + mkdir -p /sys/fs/cgroup/cpuacct//kubelet.slice
	2022-06-02T19:14:50.673418700Z  + '[' /sys/fs/cgroup/cpuacct == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.673441900Z  + mount --bind /sys/fs/cgroup/cpuacct//kubelet.slice /sys/fs/cgroup/cpuacct//kubelet.slice
	2022-06-02T19:14:50.676144600Z  + IFS=
	2022-06-02T19:14:50.676166200Z  + read -r subsystem
	2022-06-02T19:14:50.676173700Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/blkio
	2022-06-02T19:14:50.676182200Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.676189000Z  + local subsystem=/sys/fs/cgroup/blkio
	2022-06-02T19:14:50.676195700Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.676202300Z  + mkdir -p /sys/fs/cgroup/blkio//kubelet
	2022-06-02T19:14:50.678909900Z  + '[' /sys/fs/cgroup/blkio == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.678931300Z  + mount --bind /sys/fs/cgroup/blkio//kubelet /sys/fs/cgroup/blkio//kubelet
	2022-06-02T19:14:50.680760500Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/blkio
	2022-06-02T19:14:50.680783400Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.680793300Z  + local subsystem=/sys/fs/cgroup/blkio
	2022-06-02T19:14:50.680800400Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.680807500Z  + mkdir -p /sys/fs/cgroup/blkio//kubelet.slice
	2022-06-02T19:14:50.682970200Z  + '[' /sys/fs/cgroup/blkio == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.683058300Z  + mount --bind /sys/fs/cgroup/blkio//kubelet.slice /sys/fs/cgroup/blkio//kubelet.slice
	2022-06-02T19:14:50.686365900Z  + IFS=
	2022-06-02T19:14:50.686384200Z  + read -r subsystem
	2022-06-02T19:14:50.686389700Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/memory
	2022-06-02T19:14:50.686394300Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.686398700Z  + local subsystem=/sys/fs/cgroup/memory
	2022-06-02T19:14:50.686403200Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.686407600Z  + mkdir -p /sys/fs/cgroup/memory//kubelet
	2022-06-02T19:14:50.688478900Z  + '[' /sys/fs/cgroup/memory == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.688497000Z  + mount --bind /sys/fs/cgroup/memory//kubelet /sys/fs/cgroup/memory//kubelet
	2022-06-02T19:14:50.690775200Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/memory
	2022-06-02T19:14:50.690794700Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.690803900Z  + local subsystem=/sys/fs/cgroup/memory
	2022-06-02T19:14:50.690825100Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.690835400Z  + mkdir -p /sys/fs/cgroup/memory//kubelet.slice
	2022-06-02T19:14:50.692611800Z  + '[' /sys/fs/cgroup/memory == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.692618200Z  + mount --bind /sys/fs/cgroup/memory//kubelet.slice /sys/fs/cgroup/memory//kubelet.slice
	2022-06-02T19:14:50.694936900Z  + IFS=
	2022-06-02T19:14:50.694961900Z  + read -r subsystem
	2022-06-02T19:14:50.694975900Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/devices
	2022-06-02T19:14:50.695066200Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.695085300Z  + local subsystem=/sys/fs/cgroup/devices
	2022-06-02T19:14:50.695094900Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.695102700Z  + mkdir -p /sys/fs/cgroup/devices//kubelet
	2022-06-02T19:14:50.697617600Z  + '[' /sys/fs/cgroup/devices == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.697635000Z  + mount --bind /sys/fs/cgroup/devices//kubelet /sys/fs/cgroup/devices//kubelet
	2022-06-02T19:14:50.700338200Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/devices
	2022-06-02T19:14:50.700359800Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.700368400Z  + local subsystem=/sys/fs/cgroup/devices
	2022-06-02T19:14:50.700376000Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.700382700Z  + mkdir -p /sys/fs/cgroup/devices//kubelet.slice
	2022-06-02T19:14:50.702196600Z  + '[' /sys/fs/cgroup/devices == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.702221900Z  + mount --bind /sys/fs/cgroup/devices//kubelet.slice /sys/fs/cgroup/devices//kubelet.slice
	2022-06-02T19:14:50.704303600Z  + IFS=
	2022-06-02T19:14:50.704325300Z  + read -r subsystem
	2022-06-02T19:14:50.704333700Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/freezer
	2022-06-02T19:14:50.704338400Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.704342700Z  + local subsystem=/sys/fs/cgroup/freezer
	2022-06-02T19:14:50.704347200Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.704351500Z  + mkdir -p /sys/fs/cgroup/freezer//kubelet
	2022-06-02T19:14:50.706414000Z  + '[' /sys/fs/cgroup/freezer == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.706430400Z  + mount --bind /sys/fs/cgroup/freezer//kubelet /sys/fs/cgroup/freezer//kubelet
	2022-06-02T19:14:50.708873200Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/freezer
	2022-06-02T19:14:50.708905200Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.708910200Z  + local subsystem=/sys/fs/cgroup/freezer
	2022-06-02T19:14:50.708914700Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.708920700Z  + mkdir -p /sys/fs/cgroup/freezer//kubelet.slice
	2022-06-02T19:14:50.710904700Z  + '[' /sys/fs/cgroup/freezer == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.710930200Z  + mount --bind /sys/fs/cgroup/freezer//kubelet.slice /sys/fs/cgroup/freezer//kubelet.slice
	2022-06-02T19:14:50.713307000Z  + IFS=
	2022-06-02T19:14:50.713324500Z  + read -r subsystem
	2022-06-02T19:14:50.713330400Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/net_cls
	2022-06-02T19:14:50.713336900Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.713341500Z  + local subsystem=/sys/fs/cgroup/net_cls
	2022-06-02T19:14:50.713345900Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.713350400Z  + mkdir -p /sys/fs/cgroup/net_cls//kubelet
	2022-06-02T19:14:50.715809100Z  + '[' /sys/fs/cgroup/net_cls == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.715826000Z  + mount --bind /sys/fs/cgroup/net_cls//kubelet /sys/fs/cgroup/net_cls//kubelet
	2022-06-02T19:14:50.718136500Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/net_cls
	2022-06-02T19:14:50.718184100Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.718194800Z  + local subsystem=/sys/fs/cgroup/net_cls
	2022-06-02T19:14:50.718202200Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.718209100Z  + mkdir -p /sys/fs/cgroup/net_cls//kubelet.slice
	2022-06-02T19:14:50.720366500Z  + '[' /sys/fs/cgroup/net_cls == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.720377900Z  + mount --bind /sys/fs/cgroup/net_cls//kubelet.slice /sys/fs/cgroup/net_cls//kubelet.slice
	2022-06-02T19:14:50.723945100Z  + IFS=
	2022-06-02T19:14:50.723970900Z  + read -r subsystem
	2022-06-02T19:14:50.723982500Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/perf_event
	2022-06-02T19:14:50.723990300Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.723997900Z  + local subsystem=/sys/fs/cgroup/perf_event
	2022-06-02T19:14:50.724004800Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.724018800Z  + mkdir -p /sys/fs/cgroup/perf_event//kubelet
	2022-06-02T19:14:50.725699100Z  + '[' /sys/fs/cgroup/perf_event == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.725716500Z  + mount --bind /sys/fs/cgroup/perf_event//kubelet /sys/fs/cgroup/perf_event//kubelet
	2022-06-02T19:14:50.728610600Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/perf_event
	2022-06-02T19:14:50.728710800Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.728722000Z  + local subsystem=/sys/fs/cgroup/perf_event
	2022-06-02T19:14:50.728729500Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.728738200Z  + mkdir -p /sys/fs/cgroup/perf_event//kubelet.slice
	2022-06-02T19:14:50.730989800Z  + '[' /sys/fs/cgroup/perf_event == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.731016200Z  + mount --bind /sys/fs/cgroup/perf_event//kubelet.slice /sys/fs/cgroup/perf_event//kubelet.slice
	2022-06-02T19:14:50.733910600Z  + IFS=
	2022-06-02T19:14:50.733936800Z  + read -r subsystem
	2022-06-02T19:14:50.733948700Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/net_prio
	2022-06-02T19:14:50.734058400Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.734068200Z  + local subsystem=/sys/fs/cgroup/net_prio
	2022-06-02T19:14:50.734075200Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.734081800Z  + mkdir -p /sys/fs/cgroup/net_prio//kubelet
	2022-06-02T19:14:50.737271000Z  + '[' /sys/fs/cgroup/net_prio == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.737287600Z  + mount --bind /sys/fs/cgroup/net_prio//kubelet /sys/fs/cgroup/net_prio//kubelet
	2022-06-02T19:14:50.741574700Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/net_prio
	2022-06-02T19:14:50.741805700Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.741820400Z  + local subsystem=/sys/fs/cgroup/net_prio
	2022-06-02T19:14:50.741825100Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.741829500Z  + mkdir -p /sys/fs/cgroup/net_prio//kubelet.slice
	2022-06-02T19:14:50.743231700Z  + '[' /sys/fs/cgroup/net_prio == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.743250900Z  + mount --bind /sys/fs/cgroup/net_prio//kubelet.slice /sys/fs/cgroup/net_prio//kubelet.slice
	2022-06-02T19:14:50.746578800Z  + IFS=
	2022-06-02T19:14:50.746600100Z  + read -r subsystem
	2022-06-02T19:14:50.746623200Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/hugetlb
	2022-06-02T19:14:50.746629500Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.746635700Z  + local subsystem=/sys/fs/cgroup/hugetlb
	2022-06-02T19:14:50.746642100Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.746648400Z  + mkdir -p /sys/fs/cgroup/hugetlb//kubelet
	2022-06-02T19:14:50.748713000Z  + '[' /sys/fs/cgroup/hugetlb == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.748735300Z  + mount --bind /sys/fs/cgroup/hugetlb//kubelet /sys/fs/cgroup/hugetlb//kubelet
	2022-06-02T19:14:50.751380300Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/hugetlb
	2022-06-02T19:14:50.751402900Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.751411400Z  + local subsystem=/sys/fs/cgroup/hugetlb
	2022-06-02T19:14:50.751513300Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.751732100Z  + mkdir -p /sys/fs/cgroup/hugetlb//kubelet.slice
	2022-06-02T19:14:50.754310600Z  + '[' /sys/fs/cgroup/hugetlb == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.754326700Z  + mount --bind /sys/fs/cgroup/hugetlb//kubelet.slice /sys/fs/cgroup/hugetlb//kubelet.slice
	2022-06-02T19:14:50.756979200Z  + IFS=
	2022-06-02T19:14:50.757072600Z  + read -r subsystem
	2022-06-02T19:14:50.757092300Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/pids
	2022-06-02T19:14:50.757099700Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.757106800Z  + local subsystem=/sys/fs/cgroup/pids
	2022-06-02T19:14:50.757662700Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.757717700Z  + mkdir -p /sys/fs/cgroup/pids//kubelet
	2022-06-02T19:14:50.760184200Z  + '[' /sys/fs/cgroup/pids == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.760202300Z  + mount --bind /sys/fs/cgroup/pids//kubelet /sys/fs/cgroup/pids//kubelet
	2022-06-02T19:14:50.762758900Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/pids
	2022-06-02T19:14:50.762781300Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.762797800Z  + local subsystem=/sys/fs/cgroup/pids
	2022-06-02T19:14:50.762805400Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.762812400Z  + mkdir -p /sys/fs/cgroup/pids//kubelet.slice
	2022-06-02T19:14:50.765008100Z  + '[' /sys/fs/cgroup/pids == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.765033100Z  + mount --bind /sys/fs/cgroup/pids//kubelet.slice /sys/fs/cgroup/pids//kubelet.slice
	2022-06-02T19:14:50.768915600Z  + IFS=
	2022-06-02T19:14:50.768939900Z  + read -r subsystem
	2022-06-02T19:14:50.768953500Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/rdma
	2022-06-02T19:14:50.768961500Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.770134900Z  + local subsystem=/sys/fs/cgroup/rdma
	2022-06-02T19:14:50.770150800Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.770155700Z  + mkdir -p /sys/fs/cgroup/rdma//kubelet
	2022-06-02T19:14:50.772625100Z  + '[' /sys/fs/cgroup/rdma == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.772641200Z  + mount --bind /sys/fs/cgroup/rdma//kubelet /sys/fs/cgroup/rdma//kubelet
	2022-06-02T19:14:50.774988700Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/rdma
	2022-06-02T19:14:50.775014900Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.775023200Z  + local subsystem=/sys/fs/cgroup/rdma
	2022-06-02T19:14:50.775030600Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.775037800Z  + mkdir -p /sys/fs/cgroup/rdma//kubelet.slice
	2022-06-02T19:14:50.777036100Z  + '[' /sys/fs/cgroup/rdma == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.777058300Z  + mount --bind /sys/fs/cgroup/rdma//kubelet.slice /sys/fs/cgroup/rdma//kubelet.slice
	2022-06-02T19:14:50.780622600Z  + IFS=
	2022-06-02T19:14:50.780639700Z  + read -r subsystem
	2022-06-02T19:14:50.780644700Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/systemd
	2022-06-02T19:14:50.780649000Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.780653200Z  + local subsystem=/sys/fs/cgroup/systemd
	2022-06-02T19:14:50.780657200Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.780661200Z  + mkdir -p /sys/fs/cgroup/systemd//kubelet
	2022-06-02T19:14:50.782786400Z  + '[' /sys/fs/cgroup/systemd == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.782804800Z  + mount --bind /sys/fs/cgroup/systemd//kubelet /sys/fs/cgroup/systemd//kubelet
	2022-06-02T19:14:50.785767300Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/systemd
	2022-06-02T19:14:50.785803400Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.785811700Z  + local subsystem=/sys/fs/cgroup/systemd
	2022-06-02T19:14:50.785819100Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.785826400Z  + mkdir -p /sys/fs/cgroup/systemd//kubelet.slice
	2022-06-02T19:14:50.787815200Z  + '[' /sys/fs/cgroup/systemd == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.787831800Z  + mount --bind /sys/fs/cgroup/systemd//kubelet.slice /sys/fs/cgroup/systemd//kubelet.slice
	2022-06-02T19:14:50.790942900Z  + IFS=
	2022-06-02T19:14:50.790959200Z  + read -r subsystem
	2022-06-02T19:14:50.790964300Z  + return
	2022-06-02T19:14:50.790969000Z  + fix_machine_id
	2022-06-02T19:14:50.790975300Z  + echo 'INFO: clearing and regenerating /etc/machine-id'
	2022-06-02T19:14:50.791498800Z  INFO: clearing and regenerating /etc/machine-id
	2022-06-02T19:14:50.791517300Z  + rm -f /etc/machine-id
	2022-06-02T19:14:50.794022200Z  + systemd-machine-id-setup
	2022-06-02T19:14:50.801127500Z  Initializing machine ID from D-Bus machine ID.
	2022-06-02T19:14:50.821163900Z  + fix_product_name
	2022-06-02T19:14:50.821183400Z  + [[ -f /sys/class/dmi/id/product_name ]]
	2022-06-02T19:14:50.821188900Z  + fix_product_uuid
	2022-06-02T19:14:50.821193600Z  + [[ ! -f /kind/product_uuid ]]
	2022-06-02T19:14:50.821200800Z  + cat /proc/sys/kernel/random/uuid
	2022-06-02T19:14:50.823503700Z  + [[ -f /sys/class/dmi/id/product_uuid ]]
	2022-06-02T19:14:50.823602700Z  + [[ -f /sys/devices/virtual/dmi/id/product_uuid ]]
	2022-06-02T19:14:50.823613400Z  + select_iptables
	2022-06-02T19:14:50.823618400Z  + local mode num_legacy_lines num_nft_lines
	2022-06-02T19:14:50.825450600Z  ++ grep -c '^-'
	2022-06-02T19:14:50.835300900Z  + num_legacy_lines=6
	2022-06-02T19:14:50.836987500Z  ++ grep -c '^-'
	2022-06-02T19:14:50.844414000Z  ++ true
	2022-06-02T19:14:50.844442100Z  + num_nft_lines=0
	2022-06-02T19:14:50.844453200Z  + '[' 6 -ge 0 ']'
	2022-06-02T19:14:50.844461000Z  + mode=legacy
	2022-06-02T19:14:50.844872000Z  + echo 'INFO: setting iptables to detected mode: legacy'
	2022-06-02T19:14:50.844896700Z  INFO: setting iptables to detected mode: legacy
	2022-06-02T19:14:50.844906200Z  + update-alternatives --set iptables /usr/sbin/iptables-legacy
	2022-06-02T19:14:50.844913700Z  + echo 'retryable update-alternatives: --set iptables /usr/sbin/iptables-legacy'
	2022-06-02T19:14:50.846306600Z  + local 'args=--set iptables /usr/sbin/iptables-legacy'
	2022-06-02T19:14:50.846326400Z  ++ seq 0 15
	2022-06-02T19:14:50.848247500Z  + for i in $(seq 0 15)
	2022-06-02T19:14:50.848270600Z  + /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-legacy
	2022-06-02T19:14:50.861315500Z  + return
	2022-06-02T19:14:50.862343300Z  + update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
	2022-06-02T19:14:50.862360700Z  + echo 'retryable update-alternatives: --set ip6tables /usr/sbin/ip6tables-legacy'
	2022-06-02T19:14:50.862514700Z  + local 'args=--set ip6tables /usr/sbin/ip6tables-legacy'
	2022-06-02T19:14:50.863719600Z  ++ seq 0 15
	2022-06-02T19:14:50.864845400Z  + for i in $(seq 0 15)
	2022-06-02T19:14:50.864868200Z  + /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
	2022-06-02T19:14:50.885144200Z  + return
	2022-06-02T19:14:50.885170500Z  + enable_network_magic
	2022-06-02T19:14:50.885180600Z  + local docker_embedded_dns_ip=127.0.0.11
	2022-06-02T19:14:50.885190500Z  + local docker_host_ip
	2022-06-02T19:14:50.887395500Z  ++ cut '-d ' -f1
	2022-06-02T19:14:50.887416900Z  ++ head -n1 /dev/fd/63
	2022-06-02T19:14:50.887426000Z  +++ getent ahostsv4 host.docker.internal
	2022-06-02T19:14:50.899108300Z  + docker_host_ip=192.168.65.2
	2022-06-02T19:14:50.899135900Z  + [[ -z 192.168.65.2 ]]
	2022-06-02T19:14:50.899148100Z  + [[ 192.168.65.2 =~ ^127\.[0-9]+\.[0-9]+\.[0-9]+$ ]]
	2022-06-02T19:14:50.899156500Z  + iptables-save
	2022-06-02T19:14:50.900104200Z  + iptables-restore
	2022-06-02T19:14:50.902605500Z  + sed -e 's/-d 127.0.0.11/-d 192.168.65.2/g' -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' -e 's/--to-source :53/--to-source 192.168.65.2:53/g'
	2022-06-02T19:14:50.907713700Z  + cp /etc/resolv.conf /etc/resolv.conf.original
	2022-06-02T19:14:50.910805200Z  + sed -e s/127.0.0.11/192.168.65.2/g /etc/resolv.conf.original
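Note: enable_network_magic keeps DNS working inside the nested node by pointing everything that referenced Docker's embedded resolver (127.0.0.11) at the Docker Desktop host address instead (192.168.65.2 here, resolved from host.docker.internal). The sed expressions below are taken verbatim from the trace; the final redirection into /etc/resolv.conf is not visible in the xtrace and is an assumption.

    # Resolve the host-side resolver address the same way the trace does.
    docker_host_ip="$(getent ahostsv4 host.docker.internal | head -n1 | cut -d' ' -f1)"
    # Retarget the embedded-DNS NAT rules and duplicate each OUTPUT rule
    # into PREROUTING so forwarded traffic gets rewritten too.
    iptables-save \
      | sed -e "s/-d 127.0.0.11/-d ${docker_host_ip}/g" \
            -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' \
            -e "s/--to-source :53/--to-source ${docker_host_ip}:53/g" \
      | iptables-restore
    cp /etc/resolv.conf /etc/resolv.conf.original
    sed -e "s/127.0.0.11/${docker_host_ip}/g" /etc/resolv.conf.original > /etc/resolv.conf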
	2022-06-02T19:14:50.915844200Z  ++ cut '-d ' -f1
	2022-06-02T19:14:50.915868100Z  ++ head -n1 /dev/fd/63
	2022-06-02T19:14:50.916817200Z  ++++ hostname
	2022-06-02T19:14:50.918417300Z  +++ getent ahostsv4 kubernetes-upgrade-20220602191340-12108
	2022-06-02T19:14:50.922052900Z  + curr_ipv4=192.168.58.2
	2022-06-02T19:14:50.922075200Z  + echo 'INFO: Detected IPv4 address: 192.168.58.2'
	2022-06-02T19:14:50.922084900Z  INFO: Detected IPv4 address: 192.168.58.2
	2022-06-02T19:14:50.922092500Z  + '[' -f /kind/old-ipv4 ']'
	2022-06-02T19:14:50.922099700Z  + [[ -n 192.168.58.2 ]]
	2022-06-02T19:14:50.922113700Z  + echo -n 192.168.58.2
	2022-06-02T19:14:50.924330900Z  ++ cut '-d ' -f1
	2022-06-02T19:14:50.924353000Z  ++ head -n1 /dev/fd/63
	2022-06-02T19:14:50.925374800Z  ++++ hostname
	2022-06-02T19:14:50.926764900Z  +++ getent ahostsv6 kubernetes-upgrade-20220602191340-12108
	2022-06-02T19:14:50.930022200Z  + curr_ipv6=
	2022-06-02T19:14:50.930044200Z  + echo 'INFO: Detected IPv6 address: '
	2022-06-02T19:14:50.930067400Z  INFO: Detected IPv6 address: 
	2022-06-02T19:14:50.930075400Z  + '[' -f /kind/old-ipv6 ']'
	2022-06-02T19:14:50.930472500Z  + [[ -n '' ]]
	2022-06-02T19:14:50.931714600Z  ++ uname -a
	2022-06-02T19:14:50.933410300Z  + echo 'entrypoint completed: Linux kubernetes-upgrade-20220602191340-12108 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux'
	2022-06-02T19:14:50.933426400Z  entrypoint completed: Linux kubernetes-upgrade-20220602191340-12108 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	2022-06-02T19:14:50.933431900Z  + exec /sbin/init
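Note: the exec on the last line replaces the entrypoint shell with systemd, so PID 1 inside the container is systemd itself from here on. That is why what follows reads as a normal systemd boot, and why the orderly shutdown beginning at 19:16:24 can be driven from outside: the container's StopSignal is SIGRTMIN+3 (see the inspect dump later in this log), which systemd interprets as a halt request.

    # Last step of the entrypoint: become systemd rather than fork it,
    # so signals sent to the container land on systemd directly.
    exec /sbin/init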
	2022-06-02T19:14:50.944506800Z  systemd 245.4-4ubuntu3.17 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
	2022-06-02T19:14:50.944533100Z  Detected virtualization wsl.
	2022-06-02T19:14:50.944547400Z  Detected architecture x86-64.
	2022-06-02T19:14:50.944558600Z  Failed to create symlink /sys/fs/cgroup/cpuacct: File exists
	2022-06-02T19:14:50.944567000Z  Failed to create symlink /sys/fs/cgroup/cpu: File exists
	2022-06-02T19:14:50.944574500Z  Failed to create symlink /sys/fs/cgroup/net_cls: File exists
	2022-06-02T19:14:50.944581700Z  Failed to create symlink /sys/fs/cgroup/net_prio: File exists
	2022-06-02T19:14:50.944589100Z  
	2022-06-02T19:14:50.944735500Z  Welcome to Ubuntu 20.04.4 LTS!
	2022-06-02T19:14:50.944746300Z  
	2022-06-02T19:14:50.944753200Z  Set hostname to <kubernetes-upgrade-20220602191340-12108>.
	2022-06-02T19:14:51.009827900Z  [  OK  ] Started Dispatch Password …ts to Console Directory Watch.
	2022-06-02T19:14:51.009863700Z  [  OK  ] Set up automount Arbitrary…s File System Automount Point.
	2022-06-02T19:14:51.009874200Z  [  OK  ] Reached target Local Encrypted Volumes.
	2022-06-02T19:14:51.009907000Z  [  OK  ] Reached target Network is Online.
	2022-06-02T19:14:51.010355900Z  [  OK  ] Reached target Paths.
	2022-06-02T19:14:51.010380100Z  [  OK  ] Reached target Slices.
	2022-06-02T19:14:51.010391000Z  [  OK  ] Reached target Swap.
	2022-06-02T19:14:51.012438200Z  [  OK  ] Listening on Journal Socket (/dev/log).
	2022-06-02T19:14:51.012457400Z  [  OK  ] Listening on Journal Socket.
	2022-06-02T19:14:51.015334600Z           Mounting Huge Pages File System...
	2022-06-02T19:14:51.018129400Z           Mounting Kernel Debug File System...
	2022-06-02T19:14:51.020483400Z           Mounting Kernel Trace File System...
	2022-06-02T19:14:51.029572300Z           Starting Journal Service...
	2022-06-02T19:14:51.032604500Z           Mounting FUSE Control File System...
	2022-06-02T19:14:51.035745200Z           Starting Remount Root and Kernel File Systems...
	2022-06-02T19:14:51.039067700Z           Starting Apply Kernel Variables...
	2022-06-02T19:14:51.043204800Z  [  OK  ] Mounted Huge Pages File System.
	2022-06-02T19:14:51.043227200Z  [  OK  ] Mounted Kernel Debug File System.
	2022-06-02T19:14:51.043885500Z  [  OK  ] Mounted Kernel Trace File System.
	2022-06-02T19:14:51.043915500Z  [  OK  ] Mounted FUSE Control File System.
	2022-06-02T19:14:51.048860000Z  [  OK  ] Finished Remount Root and Kernel File Systems.
	2022-06-02T19:14:51.055112100Z           Starting Create System Users...
	2022-06-02T19:14:51.059600400Z           Starting Update UTMP about System Boot/Shutdown...
	2022-06-02T19:14:51.062223500Z  [  OK  ] Finished Apply Kernel Variables.
	2022-06-02T19:14:51.075082200Z  [  OK  ] Finished Update UTMP about System Boot/Shutdown.
	2022-06-02T19:14:51.090738300Z  [  OK  ] Started Journal Service.
	2022-06-02T19:14:51.093257300Z           Starting Flush Journal to Persistent Storage...
	2022-06-02T19:14:51.102388200Z  [  OK  ] Finished Flush Journal to Persistent Storage.
	2022-06-02T19:14:51.376486400Z  [  OK  ] Finished Create System Users.
	2022-06-02T19:14:51.378728400Z           Starting Create Static Device Nodes in /dev...
	2022-06-02T19:14:51.388995100Z  [  OK  ] Finished Create Static Device Nodes in /dev.
	2022-06-02T19:14:51.389027500Z  [  OK  ] Reached target Local File Systems (Pre).
	2022-06-02T19:14:51.389481500Z  [  OK  ] Reached target Local File Systems.
	2022-06-02T19:14:51.390031800Z  [  OK  ] Reached target System Initialization.
	2022-06-02T19:14:51.390911400Z  [  OK  ] Started Daily Cleanup of Temporary Directories.
	2022-06-02T19:14:51.390942900Z  [  OK  ] Reached target Timers.
	2022-06-02T19:14:51.391858200Z  [  OK  ] Listening on BuildKit.
	2022-06-02T19:14:51.392704400Z  [  OK  ] Listening on D-Bus System Message Bus Socket.
	2022-06-02T19:14:51.394666900Z           Starting Docker Socket for the API.
	2022-06-02T19:14:51.396946800Z           Starting Podman API Socket.
	2022-06-02T19:14:51.397910000Z  [  OK  ] Listening on Docker Socket for the API.
	2022-06-02T19:14:51.398877500Z  [  OK  ] Listening on Podman API Socket.
	2022-06-02T19:14:51.398893100Z  [  OK  ] Reached target Sockets.
	2022-06-02T19:14:51.399407100Z  [  OK  ] Reached target Basic System.
	2022-06-02T19:14:51.401570300Z           Starting containerd container runtime...
	2022-06-02T19:14:51.403719200Z  [  OK  ] Started D-Bus System Message Bus.
	2022-06-02T19:14:51.407847300Z           Starting minikube automount...
	2022-06-02T19:14:51.410746200Z           Starting OpenBSD Secure Shell server...
	2022-06-02T19:14:51.443531700Z  [  OK  ] Finished minikube automount.
	2022-06-02T19:14:51.460213600Z  [  OK  ] Started OpenBSD Secure Shell server.
	2022-06-02T19:14:51.591717200Z  [  OK  ] Started containerd container runtime.
	2022-06-02T19:14:51.595461100Z           Starting Docker Application Container Engine...
	2022-06-02T19:14:53.339215000Z  [  OK  ] Started Docker Application Container Engine.
	2022-06-02T19:14:53.339245700Z  [  OK  ] Reached target Multi-User System.
	2022-06-02T19:14:53.339254700Z  [  OK  ] Reached target Graphical Interface.
	2022-06-02T19:14:53.347127100Z           Starting Update UTMP about System Runlevel Changes...
	2022-06-02T19:14:53.359644700Z  [  OK  ] Finished Update UTMP about System Runlevel Changes.
	2022-06-02T19:16:24.822190300Z  [  OK  ] Stopped target Graphical Interface.
	2022-06-02T19:16:24.822251900Z  [  OK  ] Stopped target Multi-User System.
	2022-06-02T19:16:24.823206500Z  [  OK  ] Stopped target Timers.
	2022-06-02T19:16:24.823836800Z  [  OK  ] Stopped Daily Cleanup of Temporary Directories.
	2022-06-02T19:16:24.826973300Z           Stopping D-Bus System Message Bus...
	2022-06-02T19:16:24.834685900Z           Stopping Docker Application Container Engine...
	2022-06-02T19:16:24.836011600Z           Stopping kubelet: The Kubernetes Node Agent...
	2022-06-02T19:16:24.836939200Z           Stopping OpenBSD Secure Shell server...
	2022-06-02T19:16:24.839234900Z  [  OK  ] Stopped OpenBSD Secure Shell server.
	2022-06-02T19:16:24.840617100Z  [  OK  ] Stopped D-Bus System Message Bus.
	2022-06-02T19:16:25.341556400Z  [  OK  ] Stopped kubelet: The Kubernetes Node Agent.
	2022-06-02T19:16:25.734754900Z  [  OK  ] Unmounted /var/lib/docker/…2e2c93a17b7961968108c4/merged.
	2022-06-02T19:16:25.740739700Z  [  OK  ] Unmounted /var/lib/docker/…f898b5f9b7498ab0adb5f8/merged.
	2022-06-02T19:16:25.836856300Z  [  OK  ] Unmounted /var/lib/docker/…c8684f4e1d80552b2b3276/merged.
	2022-06-02T19:16:25.843254000Z  [  OK  ] Unmounted /var/lib/docker/…ef29200644eb2cd619bdc6/merged.
	2022-06-02T19:16:25.850125500Z  [  OK  ] Unmounted /var/lib/docker/…495a747af6b9836e25e9e2/merged.
	2022-06-02T19:16:25.963954300Z  [  OK  ] Unmounted /var/lib/docker/…d634a69f2a60e39e5f11c4/merged.
	2022-06-02T19:16:25.988466400Z  [  OK  ] Unmounted /var/lib/docker/…df4611af5681e635dc8f99/merged.
	2022-06-02T19:16:26.159166000Z  [  OK  ] Unmounted /var/lib/docker/…8d3fd2c2462da874b7/mounts/shm.
	2022-06-02T19:16:26.159207000Z  [  OK  ] Unmounted /var/lib/docker/…fae84ff7913edc61ad05cd/merged.
	2022-06-02T19:16:26.207494500Z  [  OK  ] Unmounted /var/lib/docker/…a14c3f8c389f5ef8be/mounts/shm.
	2022-06-02T19:16:26.207534800Z  [  OK  ] Unmounted /var/lib/docker/…bc17c55e1ffdb608672548/merged.
	2022-06-02T19:16:26.347488700Z  [  OK  ] Unmounted /var/lib/docker/…2b772ac061c280da1e/mounts/shm.
	2022-06-02T19:16:26.347533900Z  [  OK  ] Unmounted /var/lib/docker/…89417157aa9d8f260a266e/merged.
	2022-06-02T19:16:26.368581100Z  [  OK  ] Unmounted /var/lib/docker/…f294596a1b4f7be8bc/mounts/shm.
	2022-06-02T19:16:26.368608700Z  [  OK  ] Unmounted /var/lib/docker/…7d3af5931a17ce711d3b20/merged.
	2022-06-02T19:16:26.383733500Z  [  OK  ] Unmounted /var/lib/docker/…c0bb8965aa3ff6bf5f/mounts/shm.
	2022-06-02T19:16:26.383762400Z  [  OK  ] Unmounted /var/lib/docker/…9a63aa66332e0f57852400/merged.
	2022-06-02T19:16:26.412963900Z  [  OK  ] Unmounted /var/lib/docker/…d9a3f97b0bcc46df6c/mounts/shm.
	2022-06-02T19:16:26.412989500Z  [  OK  ] Unmounted /var/lib/docker/…96876f08ba21f943f34f3f/merged.
	2022-06-02T19:16:26.486873200Z  [  OK  ] Unmounted /run/docker/netns/8a340d127a4d.
	2022-06-02T19:16:26.494105400Z  [  OK  ] Unmounted /var/lib/docker/…0d980dec9f4635971f/mounts/shm.
	2022-06-02T19:16:26.495203300Z  [  OK  ] Unmounted /var/lib/docker/…32697e4c7392d1a760c334/merged.
	2022-06-02T19:16:26.585665600Z  [  OK  ] Stopped Docker Application Container Engine.
	2022-06-02T19:16:26.585973500Z  [  OK  ] Stopped target Network is Online.
	2022-06-02T19:16:26.586601400Z           Stopping containerd container runtime...
	2022-06-02T19:16:26.587950600Z  [  OK  ] Stopped minikube automount.
	2022-06-02T19:16:26.642568600Z  [  OK  ] Stopped containerd container runtime.
	2022-06-02T19:16:26.642608100Z  [  OK  ] Stopped target Basic System.
	2022-06-02T19:16:26.642719500Z  [  OK  ] Stopped target Paths.
	2022-06-02T19:16:26.643518000Z  [  OK  ] Stopped target Slices.
	2022-06-02T19:16:26.643539000Z  [  OK  ] Stopped target Sockets.
	2022-06-02T19:16:26.644927700Z  [  OK  ] Closed BuildKit.
	2022-06-02T19:16:26.646036500Z  [  OK  ] Closed D-Bus System Message Bus Socket.
	2022-06-02T19:16:26.651945000Z  [  OK  ] Closed Docker Socket for the API.
	2022-06-02T19:16:26.651964700Z  [  OK  ] Closed Podman API Socket.
	2022-06-02T19:16:26.651971000Z  [  OK  ] Stopped target System Initialization.
	2022-06-02T19:16:26.653065500Z  [  OK  ] Stopped target Local Encrypted Volumes.
	2022-06-02T19:16:26.687091500Z  [  OK  ] Stopped Dispatch Password …ts to Console Directory Watch.
	2022-06-02T19:16:26.687115000Z  [  OK  ] Stopped target Local File Systems.
	2022-06-02T19:16:26.710610700Z           Unmounting /data...
	2022-06-02T19:16:26.711818200Z           Unmounting /etc/hostname...
	2022-06-02T19:16:26.713167000Z           Unmounting /etc/hosts...
	2022-06-02T19:16:26.714604100Z           Unmounting /etc/resolv.conf...
	2022-06-02T19:16:26.717223100Z           Unmounting /run/docker/netns/default...
	2022-06-02T19:16:26.718580800Z           Unmounting /tmp/hostpath-provisioner...
	2022-06-02T19:16:26.719786400Z           Unmounting /tmp/hostpath_pv...
	2022-06-02T19:16:26.721087600Z           Unmounting /usr/lib/modules...
	2022-06-02T19:16:26.724233400Z           Unmounting /var/lib/kubele…~secret/coredns-token-l4wmk...
	2022-06-02T19:16:26.727578600Z           Unmounting /var/lib/kubele…age-provisioner-token-v7xx6...
	2022-06-02T19:16:26.734048300Z           Unmounting /var/lib/kubele…cret/kube-proxy-token-r99sx...
	2022-06-02T19:16:26.735376700Z  [  OK  ] Stopped Apply Kernel Variables.
	2022-06-02T19:16:26.737010100Z           Stopping Update UTMP about System Boot/Shutdown...
	2022-06-02T19:16:26.751045100Z  [  OK  ] Stopped Update UTMP about System Boot/Shutdown.
	2022-06-02T19:16:26.799533300Z  [  OK  ] Unmounted /data.
	2022-06-02T19:16:26.801420100Z  [  OK  ] Unmounted /etc/hostname.
	2022-06-02T19:16:26.802132000Z  [  OK  ] Unmounted /etc/hosts.
	2022-06-02T19:16:26.803605600Z  [  OK  ] Unmounted /etc/resolv.conf.
	2022-06-02T19:16:26.804719500Z  [  OK  ] Unmounted /run/docker/netns/default.
	2022-06-02T19:16:26.806346300Z  [  OK  ] Unmounted /tmp/hostpath-provisioner.
	2022-06-02T19:16:26.807718200Z  [  OK  ] Unmounted /tmp/hostpath_pv.
	2022-06-02T19:16:26.808987000Z  [  OK  ] Unmounted /usr/lib/modules.
	2022-06-02T19:16:26.810282500Z  [  OK  ] Unmounted /var/lib/kubelet…io~secret/coredns-token-l4wmk.
	2022-06-02T19:16:26.811790000Z  [  OK  ] Unmounted /var/lib/kubelet…orage-provisioner-token-v7xx6.
	2022-06-02T19:16:26.813034300Z  [  OK  ] Unmounted /var/lib/kubelet…secret/kube-proxy-token-r99sx.
	2022-06-02T19:16:26.814500000Z           Unmounting /tmp...
	2022-06-02T19:16:26.815719400Z           Unmounting /var...
	2022-06-02T19:16:26.821601900Z  [  OK  ] Unmounted /tmp.
	2022-06-02T19:16:26.822235000Z  [  OK  ] Unmounted /var.
	2022-06-02T19:16:26.822251900Z  [  OK  ] Stopped target Local File Systems (Pre).
	2022-06-02T19:16:26.823093800Z  [  OK  ] Stopped target Swap.
	2022-06-02T19:16:26.823111200Z  [  OK  ] Reached target Unmount All Filesystems.
	2022-06-02T19:16:26.824257000Z  [  OK  ] Stopped Create Static Device Nodes in /dev.
	2022-06-02T19:16:26.825777700Z  [  OK  ] Stopped Create System Users.
	2022-06-02T19:16:26.826958800Z  [  OK  ] Stopped Remount Root and Kernel File Systems.
	2022-06-02T19:16:26.826976600Z  [  OK  ] Reached target Shutdown.
	2022-06-02T19:16:26.826982600Z  [  OK  ] Reached target Final Step.
	2022-06-02T19:16:26.827880500Z  [  OK  ] Finished Power-Off.
	2022-06-02T19:16:26.827897700Z  [  OK  ] Reached target Power-Off.
	
	-- /stdout --
	I0602 19:16:48.375780     936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:16:50.618494     936 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2425721s)
	I0602 19:16:50.618577     936 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:53 SystemTime:2022-06-02 19:16:49.5088628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:16:50.618577     936 errors.go:98] postmortem docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:53 SystemTime:2022-06-02 19:16:49.5088628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:16:50.628509     936 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220602191340-12108] to gather additional debugging logs...
	I0602 19:16:50.628509     936 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220602191340-12108
	W0602 19:16:51.735227     936 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220602191340-12108 returned with exit code 1
	I0602 19:16:51.735404     936 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220602191340-12108: (1.106679s)
	I0602 19:16:51.735455     936 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220602191340-12108]: docker network inspect kubernetes-upgrade-20220602191340-12108: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220602191340-12108
	I0602 19:16:51.735581     936 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220602191340-12108]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220602191340-12108
	
	** /stderr **
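Note: this is the first concrete evidence of the failure mode: the per-profile network kubernetes-upgrade-20220602191340-12108 no longer exists, even though the stopped container still references it. Two hedged checks (not commands the harness runs) that make the mismatch visible:

    # The container's config still names the deleted network...
    docker container inspect kubernetes-upgrade-20220602191340-12108 \
      --format '{{.HostConfig.NetworkMode}}'
    # ...while no network with that name exists any more.
    docker network ls --filter name=kubernetes-upgrade-20220602191340-12108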
	I0602 19:16:51.745491     936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:16:53.883934     936 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1378933s)
	I0602 19:16:53.883934     936 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:53 SystemTime:2022-06-02 19:16:52.8318479 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:16:53.893779     936 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220602191340-12108
	I0602 19:16:55.017409     936 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220602191340-12108: (1.1236247s)
	I0602 19:16:55.017409     936 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-20220602191340-12108\config.json ...
	I0602 19:16:55.019427     936 machine.go:88] provisioning docker machine ...
	I0602 19:16:55.019427     936 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220602191340-12108"
	I0602 19:16:55.026414     936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108
	W0602 19:16:56.123154     936 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108 returned with exit code 1
	I0602 19:16:56.123207     936 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108: (1.0966001s)
	I0602 19:16:56.123437     936 machine.go:91] provisioned docker machine in 1.1040051s
	I0602 19:16:56.137895     936 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 19:16:56.147476     936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108
	W0602 19:16:57.232948     936 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108 returned with exit code 1
	I0602 19:16:57.232948     936 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108: (1.0848908s)
	I0602 19:16:57.232948     936 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 19:16:57.522992     936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108
	W0602 19:16:58.634887     936 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108 returned with exit code 1
	I0602 19:16:58.634887     936 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108: (1.1118894s)
	W0602 19:16:58.634887     936 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0602 19:16:58.634887     936 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 19:16:58.645358     936 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 19:16:58.652402     936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108
	W0602 19:16:59.739449     936 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108 returned with exit code 1
	I0602 19:16:59.739449     936 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108: (1.0870412s)
	I0602 19:16:59.739449     936 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 19:17:00.047228     936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108
	W0602 19:17:01.205480     936 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108 returned with exit code 1
	I0602 19:17:01.205480     936 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108: (1.1582468s)
	W0602 19:17:01.205480     936 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0602 19:17:01.205480     936 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 19:17:01.205480     936 fix.go:57] fixHost completed within 17.7647963s
	I0602 19:17:01.205480     936 start.go:81] releasing machines lock for "kubernetes-upgrade-20220602191340-12108", held for 17.7647963s
	W0602 19:17:01.205480     936 start.go:599] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0602 19:17:01.205480     936 out.go:239] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 19:17:01.205480     936 start.go:614] Will try again in 5 seconds ...
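Note: each failed probe above evaluates a Go template against the stopped container to find the host port published for 22/tcp. On a container that is not running, .NetworkSettings.Ports is an empty map (compare the "Ports": {} in the inspect dump below), so the index expression fails and the command exits 1; minikube then cannot open an SSH session, which is what the repeated 'unable to inspect a not running container to get SSH port' errors are saying. The probe itself, copied from the log, only works against a running container:

    # Prints the host port mapped to the node's SSH port when the container
    # is up; exits 1 against a stopped container (empty Ports map).
    docker container inspect kubernetes-upgrade-20220602191340-12108 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'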
	I0602 19:17:06.210806     936 start.go:352] acquiring machines lock for kubernetes-upgrade-20220602191340-12108: {Name:mkbaca63acce43a03a0803ba4a0d56470a4248b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 19:17:06.211267     936 start.go:356] acquired machines lock for "kubernetes-upgrade-20220602191340-12108" in 262.9µs
	I0602 19:17:06.211582     936 start.go:94] Skipping create...Using existing machine configuration
	I0602 19:17:06.211678     936 fix.go:55] fixHost starting: 
	I0602 19:17:06.235585     936 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220602191340-12108 --format={{.State.Status}}
	I0602 19:17:07.473498     936 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220602191340-12108 --format={{.State.Status}}: (1.2379079s)
	I0602 19:17:07.473498     936 fix.go:103] recreateIfNeeded on kubernetes-upgrade-20220602191340-12108: state=Stopped err=<nil>
	W0602 19:17:07.473498     936 fix.go:129] unexpected machine state, will restart: <nil>
	I0602 19:17:07.476477     936 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-20220602191340-12108" ...
	I0602 19:17:07.486503     936 cli_runner.go:164] Run: docker start kubernetes-upgrade-20220602191340-12108
	W0602 19:17:09.557204     936 cli_runner.go:211] docker start kubernetes-upgrade-20220602191340-12108 returned with exit code 1
	I0602 19:17:09.557204     936 cli_runner.go:217] Completed: docker start kubernetes-upgrade-20220602191340-12108: (2.0706919s)
	I0602 19:17:09.563185     936 cli_runner.go:164] Run: docker inspect kubernetes-upgrade-20220602191340-12108
	I0602 19:17:10.684490     936 cli_runner.go:217] Completed: docker inspect kubernetes-upgrade-20220602191340-12108: (1.1212993s)
	I0602 19:17:10.684490     936 errors.go:84] Postmortem inspect ("docker inspect kubernetes-upgrade-20220602191340-12108"): -- stdout --
	[
	    {
	        "Id": "e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024",
	        "Created": "2022-06-02T19:14:47.8422731Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "network 322f4233d6e4b7b1eb6c0630668710dda5d7f72c8791e2d7e090bf7c854cb1c3 not found",
	            "StartedAt": "2022-06-02T19:14:50.2110413Z",
	            "FinishedAt": "2022-06-02T19:16:27.1435289Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024/hostname",
	        "HostsPath": "/var/lib/docker/containers/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024/hosts",
	        "LogPath": "/var/lib/docker/containers/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024-json.log",
	        "Name": "/kubernetes-upgrade-20220602191340-12108",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-20220602191340-12108:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20220602191340-12108",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dd7f480705ee377d00cab0a1c3708bcaa5fd0a0169bf492baa593faa194ce02c-init/diff:/var/lib/docker/overlay2/dfce970b43800856c522d9750e5e1364e8adf4be4cf71ca7c53d79b33355f5a7/diff:/var/lib/docker/overlay2/4fd23a1b84854239f1bb855d05e42ecd6acbd1b0944b347813a56f5f45356a42/diff:/var/lib/docker/overlay2/864c5b1fbc297750771bb843fdeb4bafa10868a71716f4a01f1119609fb34667/diff:/var/lib/docker/overlay2/0f11f6855118857c743b90ca120ff7aa550f8157d475abf59df950433a5bc6e8/diff:/var/lib/docker/overlay2/2ae7f559725a060dc3b3a9c2fbd554b98114ae47dbf8db75f13bd8a95cbae19a/diff:/var/lib/docker/overlay2/48f41ac288d1037223ac101e6bc07f05729cdcecd98cc85971db99e90765c437/diff:/var/lib/docker/overlay2/8d4eaae639ade3ad3459b4fb67dbcac83774b72a2550b0a4bca1f21d122b20e6/diff:/var/lib/docker/overlay2/e06515bb91756221300de52336376d32ef9bd8685a92352e522936c4947b88ee/diff:/var/lib/docker/overlay2/a2f615fb794b704dc3823080c47e2c357cf4826ec91f6ae190c7497bb18a80cd/diff:/var/lib/docker/overlay2/22f99f
8a3da21c6e2be4c5c5e9d969af73e7695aaf9b0c7d0d09b5795ba76416/diff:/var/lib/docker/overlay2/9c0266785c64b9f6c471863067ca9db045a5aa61167a7817217cf01825a7d868/diff:/var/lib/docker/overlay2/b8a0250c9ae7d899ee3e46414c2db7f7ba363793900f8fcbf1b470586ebe7bd9/diff:/var/lib/docker/overlay2/00afbeac619cb9c06d4da311f5fc5aa3f5147b88b291acf06d4c4b36984ad5a2/diff:/var/lib/docker/overlay2/da51241ed08bd861b9d27902198eae13c3e4aac5c79f522e9f3fa209ea35e8d3/diff:/var/lib/docker/overlay2/b01176f7dbe98e3004db7c0fe45d94616a803dd8ae9cbdf3a1f2a188604178af/diff:/var/lib/docker/overlay2/0ebb0ff0177c8116e72a14ac704b161f75922cea05fe804ad1f7b83f4cd3dd70/diff:/var/lib/docker/overlay2/bae8d175bc3e334a70aaa239643efa0e8b453ab163f077d9cef60e3840c717ba/diff:/var/lib/docker/overlay2/e72a79f763a44dc32f9a2e84dc5e28a060e7fbb9f4624cb8aaa084dd356522ec/diff:/var/lib/docker/overlay2/2e1bc304b205033ad7f49fb8db243b0991596e0eec913fd13e8382aa25767e21/diff:/var/lib/docker/overlay2/ebb9b39dedfc09f9f34ea879f56a8ffd24ab9f9bf8acc93aa9df5eb93dba58e8/diff:/var/lib/d
ocker/overlay2/bffdca36eba4bce9086f2c269bcfe5b915d807483717f0e27acbd51b5bbfc11b/diff:/var/lib/docker/overlay2/96c321cbf06c0050c8a0a7897e9533db1ee5788eb09b1e1d605bdd1134af8eca/diff:/var/lib/docker/overlay2/735422b44af98e330209fe1c4273bf57aa33fcfd770f3e9d6f1a6e59f7545920/diff:/var/lib/docker/overlay2/8dc177c0589f67ded7d9c229d3c587fe77b3d1c68cf0a5af871bc23768d67d84/diff:/var/lib/docker/overlay2/9a29541ccfee3849e0691950c599bb7e4e51d9026724b1ad13abc8d8e9c140e0/diff:/var/lib/docker/overlay2/50fe1dc8f357b5d624681e6f14d98e6d33a8b6b53d70293ba90ac4435a1e18d8/diff:/var/lib/docker/overlay2/86f301a296dbb7422a3d55a008a9f38278a7a19d68a0f735d298c0c2a431ee30/diff:/var/lib/docker/overlay2/dc8087ea592587f8cb5392cc0ee739c33f2724c47b83767d593b3065914820b0/diff:/var/lib/docker/overlay2/15163601889f0d414f35ccd64ae33a52958605b5b7e50618ed5d4f4bd06ec65b/diff:/var/lib/docker/overlay2/a50cf19d9d69b9c68c6c66a918cbde678b49e8d566d06772af22bf99191b08f3/diff:/var/lib/docker/overlay2/621f3b0fc578721c5d0465771ad007f022ed238fa5a2076f807c077680c
26d27/diff:/var/lib/docker/overlay2/2652f9ffde92786a77e3bb35fe07c03a623aaad541f0ca9710839800c4b470e4/diff:/var/lib/docker/overlay2/c853755ee76ea55ad6c00f5eaff82196f4953ee6fb2d27e27ba35f86d56bfc32/diff:/var/lib/docker/overlay2/a0f70e6416a8e618ea7475b5e7f4cdc9a66ac39f0a6c1969c569d8e4f0b5e9eb/diff:/var/lib/docker/overlay2/275d2c643ecb011298df16e0794bebb9a7ec82e190aea53a90369288c521f75e/diff:/var/lib/docker/overlay2/a7e78f238badc23c2c38b7e9b9c4428c0614e825744076161295740d46a20957/diff:/var/lib/docker/overlay2/39fcd4c392271449973511a31d445289c1f8d378d01759fef12c430c9f44f2b8/diff:/var/lib/docker/overlay2/e1c51360d327e86575fe8248415fae12e9dbdde580db0e6f4f4e485ac9f92e3b/diff:/var/lib/docker/overlay2/fecd88783858177cbe3b751f0717b370c5556d7cf0ef163e2710f16fce09d53c/diff:/var/lib/docker/overlay2/3b4c7afaac6f5818bc33bec8c0ec442eb5a1010d0de6fe488460ee83a3901b21/diff:/var/lib/docker/overlay2/47d0047bc42c34ea02c33c1500f96c5109f27f84f973a5636832bbc855761e3f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dd7f480705ee377d00cab0a1c3708bcaa5fd0a0169bf492baa593faa194ce02c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dd7f480705ee377d00cab0a1c3708bcaa5fd0a0169bf492baa593faa194ce02c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dd7f480705ee377d00cab0a1c3708bcaa5fd0a0169bf492baa593faa194ce02c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20220602191340-12108",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20220602191340-12108/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20220602191340-12108",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20220602191340-12108",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20220602191340-12108",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7ed50d5ab7698ef031105a3fedaf6e5918caf31e28958792e67e2c629831ecb0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/7ed50d5ab769",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20220602191340-12108": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e425b5d64014",
	                        "kubernetes-upgrade-20220602191340-12108"
	                    ],
	                    "NetworkID": "322f4233d6e4b7b1eb6c0630668710dda5d7f72c8791e2d7e090bf7c854cb1c3",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
	
	-- /stdout --
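Note: the postmortem inspect pins down the restart failure. State.Error records 'network 322f4233d6e4b7b1eb6c0630668710dda5d7f72c8791e2d7e090bf7c854cb1c3 not found', the Networks map still carries that stale NetworkID, and HostConfig.NetworkMode binds the container to the network by name, so 'docker start' has nothing to attach to and exits 1 on every retry. A hedged recovery sketch, not something the harness attempts (it simply retries): recreate the network under the expected name, using the subnet recorded in IPAMConfig, and start again. Whether docker resolves the recreated network by name on start is an assumption; detaching and reattaching the stopped container is the fallback.

    # Assumes 192.168.58.0/24 from the IPAMConfig shown above.
    docker network create --subnet 192.168.58.0/24 kubernetes-upgrade-20220602191340-12108
    # Fallback if start still complains about the stale endpoint:
    docker network disconnect -f kubernetes-upgrade-20220602191340-12108 \
      kubernetes-upgrade-20220602191340-12108 || true
    docker network connect kubernetes-upgrade-20220602191340-12108 \
      kubernetes-upgrade-20220602191340-12108
    docker start kubernetes-upgrade-20220602191340-12108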
	I0602 19:17:10.691474     936 cli_runner.go:164] Run: docker logs --timestamps --details kubernetes-upgrade-20220602191340-12108
	I0602 19:17:11.850398     936 cli_runner.go:217] Completed: docker logs --timestamps --details kubernetes-upgrade-20220602191340-12108: (1.1589191s)
	I0602 19:17:11.850398     936 errors.go:91] Postmortem logs ("docker logs --timestamps --details kubernetes-upgrade-20220602191340-12108"): -- stdout --
	2022-06-02T19:14:50.208962100Z  + userns=
	2022-06-02T19:14:50.209003100Z  + grep -Eqv '0[[:space:]]+0[[:space:]]+4294967295' /proc/self/uid_map
	2022-06-02T19:14:50.215897100Z  + validate_userns
	2022-06-02T19:14:50.215923800Z  + [[ -z '' ]]
	2022-06-02T19:14:50.215933200Z  + return
	2022-06-02T19:14:50.215940400Z  + configure_containerd
	2022-06-02T19:14:50.215947700Z  + local snapshotter=
	2022-06-02T19:14:50.215954700Z  + [[ -n '' ]]
	2022-06-02T19:14:50.215961500Z  + [[ -z '' ]]
	2022-06-02T19:14:50.216747300Z  ++ stat -f -c %T /kind
	2022-06-02T19:14:50.218271300Z  + '[[overlayfs' == zfs ']]'
	2022-06-02T19:14:50.219106500Z  /usr/local/bin/entrypoint: line 112: [[overlayfs: command not found
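Note: the '[[overlayfs: command not found' message is a quoting slip in the entrypoint rather than a real failure: the [[ keyword and the command substitution holding the filesystem type were joined without a space, so bash parses a single word '[[overlayfs' and tries to execute it as a command. The script tolerates the non-zero status and continues with the [[ -n '' ]] that follows. A minimal reproduction of what line 112 plausibly looks like, with the intended form underneath:

    # Broken: no space after [[, so the word becomes '[[overlayfs'.
    [["$(stat -f -c %T /kind)" == zfs ]]
    # Intended comparison:
    [[ "$(stat -f -c %T /kind)" == zfs ]]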
	2022-06-02T19:14:50.220142800Z  + [[ -n '' ]]
	2022-06-02T19:14:50.220160700Z  + configure_proxy
	2022-06-02T19:14:50.220168300Z  + mkdir -p /etc/systemd/system.conf.d/
	2022-06-02T19:14:50.222374800Z  + [[ ! -z '' ]]
	2022-06-02T19:14:50.222393200Z  + cat
	2022-06-02T19:14:50.224164700Z  + fix_kmsg
	2022-06-02T19:14:50.224182900Z  + [[ ! -e /dev/kmsg ]]
	2022-06-02T19:14:50.224188800Z  + fix_mount
	2022-06-02T19:14:50.224194900Z  + echo 'INFO: ensuring we can execute mount/umount even with userns-remap'
	2022-06-02T19:14:50.224199600Z  INFO: ensuring we can execute mount/umount even with userns-remap
	2022-06-02T19:14:50.225458300Z  ++ which mount
	2022-06-02T19:14:50.227484800Z  ++ which umount
	2022-06-02T19:14:50.229527300Z  + chown root:root /usr/bin/mount /usr/bin/umount
	2022-06-02T19:14:50.292349200Z  ++ which mount
	2022-06-02T19:14:50.295658700Z  ++ which umount
	2022-06-02T19:14:50.299295600Z  + chmod -s /usr/bin/mount /usr/bin/umount
	2022-06-02T19:14:50.303523700Z  +++ which mount
	2022-06-02T19:14:50.306732900Z  ++ stat -f -c %T /usr/bin/mount
	2022-06-02T19:14:50.309597500Z  + [[ overlayfs == \a\u\f\s ]]
	2022-06-02T19:14:50.309626100Z  + echo 'INFO: remounting /sys read-only'
	2022-06-02T19:14:50.309636500Z  INFO: remounting /sys read-only
	2022-06-02T19:14:50.309643700Z  + mount -o remount,ro /sys
	2022-06-02T19:14:50.314827100Z  + echo 'INFO: making mounts shared'
	2022-06-02T19:14:50.314857700Z  INFO: making mounts shared
	2022-06-02T19:14:50.314869700Z  + mount --make-rshared /
	2022-06-02T19:14:50.317807700Z  + retryable_fix_cgroup
	2022-06-02T19:14:50.318621300Z  ++ seq 0 10
	2022-06-02T19:14:50.320614200Z  + for i in $(seq 0 10)
	2022-06-02T19:14:50.320633700Z  + fix_cgroup
	2022-06-02T19:14:50.320642500Z  + [[ -f /sys/fs/cgroup/cgroup.controllers ]]
	2022-06-02T19:14:50.320650300Z  + echo 'INFO: detected cgroup v1'
	2022-06-02T19:14:50.320692200Z  INFO: detected cgroup v1
	2022-06-02T19:14:50.320701400Z  + local current_cgroup
	2022-06-02T19:14:50.323730400Z  ++ grep -E '^[^:]*:([^:]*,)?cpu(,[^,:]*)?:.*' /proc/self/cgroup
	2022-06-02T19:14:50.323756800Z  ++ cut -d: -f3
	2022-06-02T19:14:50.326667300Z  + current_cgroup=/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.326715300Z  + '[' /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 = / ']'
	2022-06-02T19:14:50.326727600Z  + echo 'WARN: cgroupns not enabled! Please use cgroup v2, or cgroup v1 with cgroupns enabled.'
	2022-06-02T19:14:50.326735000Z  WARN: cgroupns not enabled! Please use cgroup v2, or cgroup v1 with cgroupns enabled.
	2022-06-02T19:14:50.326742000Z  + echo 'INFO: fix cgroup mounts for all subsystems'
	2022-06-02T19:14:50.326748700Z  INFO: fix cgroup mounts for all subsystems
	2022-06-02T19:14:50.326766200Z  + local cgroup_subsystems
	2022-06-02T19:14:50.328389500Z  ++ findmnt -lun -o source,target -t cgroup
	2022-06-02T19:14:50.328431100Z  ++ grep /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.329738500Z  ++ awk '{print $2}'
	2022-06-02T19:14:50.334821400Z  + cgroup_subsystems='/sys/fs/cgroup/cpuset
	2022-06-02T19:14:50.334848200Z  /sys/fs/cgroup/cpu
	2022-06-02T19:14:50.334857700Z  /sys/fs/cgroup/cpuacct
	2022-06-02T19:14:50.334877000Z  /sys/fs/cgroup/blkio
	2022-06-02T19:14:50.334890000Z  /sys/fs/cgroup/memory
	2022-06-02T19:14:50.334900800Z  /sys/fs/cgroup/devices
	2022-06-02T19:14:50.334907700Z  /sys/fs/cgroup/freezer
	2022-06-02T19:14:50.334921900Z  /sys/fs/cgroup/net_cls
	2022-06-02T19:14:50.334934100Z  /sys/fs/cgroup/perf_event
	2022-06-02T19:14:50.335048300Z  /sys/fs/cgroup/net_prio
	2022-06-02T19:14:50.335058000Z  /sys/fs/cgroup/hugetlb
	2022-06-02T19:14:50.335080700Z  /sys/fs/cgroup/pids
	2022-06-02T19:14:50.335091100Z  /sys/fs/cgroup/rdma
	2022-06-02T19:14:50.335100900Z  /sys/fs/cgroup/systemd'
	2022-06-02T19:14:50.335107500Z  + local unsupported_cgroups
	2022-06-02T19:14:50.337396700Z  ++ findmnt -lun -o source,target -t cgroup
	2022-06-02T19:14:50.337421200Z  ++ grep_allow_nomatch -v /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.337431700Z  ++ grep -v /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.338394800Z  ++ awk '{print $2}'
	2022-06-02T19:14:50.341699000Z  ++ [[ 1 == 1 ]]
	2022-06-02T19:14:50.343746000Z  + unsupported_cgroups=
	2022-06-02T19:14:50.343768900Z  + '[' -n '' ']'
	2022-06-02T19:14:50.343776200Z  + local cgroup_mounts
	2022-06-02T19:14:50.345168800Z  ++ grep -E -o '/[[:alnum:]].* /sys/fs/cgroup.*.*cgroup' /proc/self/mountinfo
	2022-06-02T19:14:50.350792000Z  + cgroup_mounts='/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:450 master:58 - cgroup
	2022-06-02T19:14:50.350816100Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime shared:451 master:59 - cgroup
	2022-06-02T19:14:50.350830100Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime shared:452 master:60 - cgroup
	2022-06-02T19:14:50.350838400Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:453 master:61 - cgroup
	2022-06-02T19:14:50.350845800Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:454 master:62 - cgroup
	2022-06-02T19:14:50.350879000Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:455 master:63 - cgroup
	2022-06-02T19:14:50.350886800Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:456 master:64 - cgroup
	2022-06-02T19:14:50.350893700Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime shared:457 master:65 - cgroup
	2022-06-02T19:14:50.350902900Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:458 master:66 - cgroup
	2022-06-02T19:14:50.350910200Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/net_prio rw,nosuid,nodev,noexec,relatime shared:459 master:67 - cgroup
	2022-06-02T19:14:50.350923800Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:460 master:68 - cgroup
	2022-06-02T19:14:50.351025300Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:461 master:69 - cgroup
	2022-06-02T19:14:50.351041500Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime shared:462 master:70 - cgroup
	2022-06-02T19:14:50.351059400Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:463 master:71 - cgroup cgroup'
	2022-06-02T19:14:50.351071300Z  + [[ -n /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:450 master:58 - cgroup
	2022-06-02T19:14:50.351081100Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime shared:451 master:59 - cgroup
	2022-06-02T19:14:50.351088500Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime shared:452 master:60 - cgroup
	2022-06-02T19:14:50.351095500Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:453 master:61 - cgroup
	2022-06-02T19:14:50.351123500Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:454 master:62 - cgroup
	2022-06-02T19:14:50.351132400Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:455 master:63 - cgroup
	2022-06-02T19:14:50.351139600Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:456 master:64 - cgroup
	2022-06-02T19:14:50.351148800Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime shared:457 master:65 - cgroup
	2022-06-02T19:14:50.351155900Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:458 master:66 - cgroup
	2022-06-02T19:14:50.351162900Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/net_prio rw,nosuid,nodev,noexec,relatime shared:459 master:67 - cgroup
	2022-06-02T19:14:50.351180500Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:460 master:68 - cgroup
	2022-06-02T19:14:50.352579300Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:461 master:69 - cgroup
	2022-06-02T19:14:50.352603200Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime shared:462 master:70 - cgroup
	2022-06-02T19:14:50.352632900Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:463 master:71 - cgroup cgroup ]]
	2022-06-02T19:14:50.352646500Z  + local mount_root
	2022-06-02T19:14:50.352652900Z  ++ head -n 1
	2022-06-02T19:14:50.352660900Z  ++ cut '-d ' -f1
	2022-06-02T19:14:50.354479100Z  + mount_root=/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
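The trace above is the cgroup-detection step of the container entrypoint: it pulls this container's cgroup v1 entries out of /proc/self/mountinfo and takes the container's own cgroup path from the first field of the first entry. A minimal shell sketch of that step, reconstructed from the trace (an illustration, not the verbatim script):

    # each matched mountinfo line looks like: "<container-cgroup-path> <mountpoint> <options> ... cgroup"
    cgroup_mounts=$(grep -E -o '/[[:alnum:]].* /sys/fs/cgroup.*.*cgroup' /proc/self/mountinfo || true)
    if [[ -n "${cgroup_mounts}" ]]; then
        # first field of the first entry = this container's cgroup path
        mount_root=$(echo "${cgroup_mounts}" | head -n 1 | cut -d' ' -f1)
    fi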
	2022-06-02T19:14:50.356187600Z  ++ echo '/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:450 master:58 - cgroup
	2022-06-02T19:14:50.356216200Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime shared:451 master:59 - cgroup
	2022-06-02T19:14:50.356227500Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime shared:452 master:60 - cgroup
	2022-06-02T19:14:50.356237000Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:453 master:61 - cgroup
	2022-06-02T19:14:50.356250400Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:454 master:62 - cgroup
	2022-06-02T19:14:50.356260400Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:455 master:63 - cgroup
	2022-06-02T19:14:50.356292300Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:456 master:64 - cgroup
	2022-06-02T19:14:50.356308000Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime shared:457 master:65 - cgroup
	2022-06-02T19:14:50.356317700Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:458 master:66 - cgroup
	2022-06-02T19:14:50.356327200Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/net_prio rw,nosuid,nodev,noexec,relatime shared:459 master:67 - cgroup
	2022-06-02T19:14:50.356336500Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:460 master:68 - cgroup
	2022-06-02T19:14:50.356349500Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:461 master:69 - cgroup
	2022-06-02T19:14:50.356361900Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime shared:462 master:70 - cgroup
	2022-06-02T19:14:50.356380200Z  /docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024 /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:463 master:71 - cgroup cgroup'
	2022-06-02T19:14:50.356392700Z  ++ cut '-d ' -f 2
	2022-06-02T19:14:50.359198700Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.359220900Z  + local target=/sys/fs/cgroup/cpuset/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.359230600Z  + findmnt /sys/fs/cgroup/cpuset/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.363517300Z  + mkdir -p /sys/fs/cgroup/cpuset/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.427299000Z  + mount --bind /sys/fs/cgroup/cpuset /sys/fs/cgroup/cpuset/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.430718400Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.430741100Z  + local target=/sys/fs/cgroup/cpu/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.430751700Z  + findmnt /sys/fs/cgroup/cpu/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.434475400Z  + mkdir -p /sys/fs/cgroup/cpu/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.438694200Z  + mount --bind /sys/fs/cgroup/cpu /sys/fs/cgroup/cpu/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.440930400Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.440952200Z  + local target=/sys/fs/cgroup/cpuacct/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.440960800Z  + findmnt /sys/fs/cgroup/cpuacct/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.445325100Z  + mkdir -p /sys/fs/cgroup/cpuacct/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.448231000Z  + mount --bind /sys/fs/cgroup/cpuacct /sys/fs/cgroup/cpuacct/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.451531300Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.451553000Z  + local target=/sys/fs/cgroup/blkio/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.451562000Z  + findmnt /sys/fs/cgroup/blkio/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.455984700Z  + mkdir -p /sys/fs/cgroup/blkio/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.458189000Z  + mount --bind /sys/fs/cgroup/blkio /sys/fs/cgroup/blkio/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.461672000Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.461694700Z  + local target=/sys/fs/cgroup/memory/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.461704300Z  + findmnt /sys/fs/cgroup/memory/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.465852800Z  + mkdir -p /sys/fs/cgroup/memory/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.468788200Z  + mount --bind /sys/fs/cgroup/memory /sys/fs/cgroup/memory/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.471344400Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.471366900Z  + local target=/sys/fs/cgroup/devices/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.471376800Z  + findmnt /sys/fs/cgroup/devices/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.475133400Z  + mkdir -p /sys/fs/cgroup/devices/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.477082700Z  + mount --bind /sys/fs/cgroup/devices /sys/fs/cgroup/devices/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.479168000Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.479186400Z  + local target=/sys/fs/cgroup/freezer/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.479192300Z  + findmnt /sys/fs/cgroup/freezer/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.482450300Z  + mkdir -p /sys/fs/cgroup/freezer/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.484448200Z  + mount --bind /sys/fs/cgroup/freezer /sys/fs/cgroup/freezer/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.488261400Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.488298800Z  + local target=/sys/fs/cgroup/net_cls/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.488307200Z  + findmnt /sys/fs/cgroup/net_cls/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.491445200Z  + mkdir -p /sys/fs/cgroup/net_cls/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.493419100Z  + mount --bind /sys/fs/cgroup/net_cls /sys/fs/cgroup/net_cls/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.496059900Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.496083900Z  + local target=/sys/fs/cgroup/perf_event/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.496092600Z  + findmnt /sys/fs/cgroup/perf_event/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.500089400Z  + mkdir -p /sys/fs/cgroup/perf_event/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.502349600Z  + mount --bind /sys/fs/cgroup/perf_event /sys/fs/cgroup/perf_event/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.504773100Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.504797100Z  + local target=/sys/fs/cgroup/net_prio/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.504807600Z  + findmnt /sys/fs/cgroup/net_prio/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.507397100Z  + mkdir -p /sys/fs/cgroup/net_prio/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.510043900Z  + mount --bind /sys/fs/cgroup/net_prio /sys/fs/cgroup/net_prio/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.512661600Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.512699100Z  + local target=/sys/fs/cgroup/hugetlb/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.512717400Z  + findmnt /sys/fs/cgroup/hugetlb/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.517050500Z  + mkdir -p /sys/fs/cgroup/hugetlb/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.519238100Z  + mount --bind /sys/fs/cgroup/hugetlb /sys/fs/cgroup/hugetlb/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.522241900Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.522255000Z  + local target=/sys/fs/cgroup/pids/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.522279000Z  + findmnt /sys/fs/cgroup/pids/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.525687400Z  + mkdir -p /sys/fs/cgroup/pids/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.528323500Z  + mount --bind /sys/fs/cgroup/pids /sys/fs/cgroup/pids/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.531332800Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.531352600Z  + local target=/sys/fs/cgroup/rdma/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.531362300Z  + findmnt /sys/fs/cgroup/rdma/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.535075900Z  + mkdir -p /sys/fs/cgroup/rdma/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.537765700Z  + mount --bind /sys/fs/cgroup/rdma /sys/fs/cgroup/rdma/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.541808500Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T19:14:50.541818500Z  + local target=/sys/fs/cgroup/systemd/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.541826100Z  + findmnt /sys/fs/cgroup/systemd/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.547185800Z  + mkdir -p /sys/fs/cgroup/systemd/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.549581900Z  + mount --bind /sys/fs/cgroup/systemd /sys/fs/cgroup/systemd/docker/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024
	2022-06-02T19:14:50.552456200Z  + mount --make-rprivate /sys/fs/cgroup
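The long expansion above is a single loop: for every cgroup v1 mountpoint, the entrypoint re-creates the container's cgroup path under that mountpoint and bind-mounts the subsystem root onto it, finishing with --make-rprivate so the new mounts do not propagate back to the host. A sketch of the loop as reconstructed from the trace (redirections are not visible in a -x trace and are assumed):

    for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2); do
        target="${mount_point}/${mount_root}"
        # in this run findmnt finds nothing, which is why every iteration
        # above goes on to mkdir + mount
        if ! findmnt "${target}" >/dev/null 2>&1; then
            mkdir -p "${target}"
            mount --bind "${mount_point}" "${target}"
        fi
    done
    mount --make-rprivate /sys/fs/cgroup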
	2022-06-02T19:14:50.557241700Z  + echo '/sys/fs/cgroup/cpuset
	2022-06-02T19:14:50.557269500Z  /sys/fs/cgroup/cpu
	2022-06-02T19:14:50.557281000Z  /sys/fs/cgroup/cpuacct
	2022-06-02T19:14:50.557288900Z  /sys/fs/cgroup/blkio
	2022-06-02T19:14:50.557295900Z  /sys/fs/cgroup/memory
	2022-06-02T19:14:50.557304800Z  /sys/fs/cgroup/devices
	2022-06-02T19:14:50.557312300Z  /sys/fs/cgroup/freezer
	2022-06-02T19:14:50.557319300Z  /sys/fs/cgroup/net_cls
	2022-06-02T19:14:50.557326200Z  /sys/fs/cgroup/perf_event
	2022-06-02T19:14:50.557333300Z  /sys/fs/cgroup/net_prio
	2022-06-02T19:14:50.557340300Z  /sys/fs/cgroup/hugetlb
	2022-06-02T19:14:50.557348200Z  /sys/fs/cgroup/pids
	2022-06-02T19:14:50.557356800Z  /sys/fs/cgroup/rdma
	2022-06-02T19:14:50.557365200Z  /sys/fs/cgroup/systemd'
	2022-06-02T19:14:50.557372400Z  + IFS=
	2022-06-02T19:14:50.557379500Z  + read -r subsystem
	2022-06-02T19:14:50.557386900Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpuset
	2022-06-02T19:14:50.557397200Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.557408600Z  + local subsystem=/sys/fs/cgroup/cpuset
	2022-06-02T19:14:50.557416200Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.557423200Z  + mkdir -p /sys/fs/cgroup/cpuset//kubelet
	2022-06-02T19:14:50.626737300Z  + '[' /sys/fs/cgroup/cpuset == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.626805000Z  + cat /sys/fs/cgroup/cpuset/cpuset.cpus
	2022-06-02T19:14:50.630539000Z  + cat /sys/fs/cgroup/cpuset/cpuset.mems
	2022-06-02T19:14:50.635384100Z  + mount --bind /sys/fs/cgroup/cpuset//kubelet /sys/fs/cgroup/cpuset//kubelet
	2022-06-02T19:14:50.639406700Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/cpuset
	2022-06-02T19:14:50.639428300Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.639437800Z  + local subsystem=/sys/fs/cgroup/cpuset
	2022-06-02T19:14:50.639444700Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.639451300Z  + mkdir -p /sys/fs/cgroup/cpuset//kubelet.slice
	2022-06-02T19:14:50.642141300Z  + '[' /sys/fs/cgroup/cpuset == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.642163900Z  + cat /sys/fs/cgroup/cpuset/cpuset.cpus
	2022-06-02T19:14:50.647798800Z  + cat /sys/fs/cgroup/cpuset/cpuset.mems
	2022-06-02T19:14:50.649799900Z  + mount --bind /sys/fs/cgroup/cpuset//kubelet.slice /sys/fs/cgroup/cpuset//kubelet.slice
	2022-06-02T19:14:50.653207700Z  + IFS=
	2022-06-02T19:14:50.653228400Z  + read -r subsystem
	2022-06-02T19:14:50.653238000Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpu
	2022-06-02T19:14:50.653251400Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.653262800Z  + local subsystem=/sys/fs/cgroup/cpu
	2022-06-02T19:14:50.653271200Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.653278000Z  + mkdir -p /sys/fs/cgroup/cpu//kubelet
	2022-06-02T19:14:50.656424100Z  + '[' /sys/fs/cgroup/cpu == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.656445800Z  + mount --bind /sys/fs/cgroup/cpu//kubelet /sys/fs/cgroup/cpu//kubelet
	2022-06-02T19:14:50.659861600Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/cpu
	2022-06-02T19:14:50.659877200Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.659889200Z  + local subsystem=/sys/fs/cgroup/cpu
	2022-06-02T19:14:50.659896600Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.659903700Z  + mkdir -p /sys/fs/cgroup/cpu//kubelet.slice
	2022-06-02T19:14:50.662154200Z  + '[' /sys/fs/cgroup/cpu == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.662173800Z  + mount --bind /sys/fs/cgroup/cpu//kubelet.slice /sys/fs/cgroup/cpu//kubelet.slice
	2022-06-02T19:14:50.665303600Z  + IFS=
	2022-06-02T19:14:50.665327500Z  + read -r subsystem
	2022-06-02T19:14:50.665337200Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpuacct
	2022-06-02T19:14:50.665345400Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.665352600Z  + local subsystem=/sys/fs/cgroup/cpuacct
	2022-06-02T19:14:50.665360100Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.665367000Z  + mkdir -p /sys/fs/cgroup/cpuacct//kubelet
	2022-06-02T19:14:50.667896500Z  + '[' /sys/fs/cgroup/cpuacct == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.667934100Z  + mount --bind /sys/fs/cgroup/cpuacct//kubelet /sys/fs/cgroup/cpuacct//kubelet
	2022-06-02T19:14:50.671382200Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/cpuacct
	2022-06-02T19:14:50.671403800Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.671413000Z  + local subsystem=/sys/fs/cgroup/cpuacct
	2022-06-02T19:14:50.671420200Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.671429200Z  + mkdir -p /sys/fs/cgroup/cpuacct//kubelet.slice
	2022-06-02T19:14:50.673418700Z  + '[' /sys/fs/cgroup/cpuacct == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.673441900Z  + mount --bind /sys/fs/cgroup/cpuacct//kubelet.slice /sys/fs/cgroup/cpuacct//kubelet.slice
	2022-06-02T19:14:50.676144600Z  + IFS=
	2022-06-02T19:14:50.676166200Z  + read -r subsystem
	2022-06-02T19:14:50.676173700Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/blkio
	2022-06-02T19:14:50.676182200Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.676189000Z  + local subsystem=/sys/fs/cgroup/blkio
	2022-06-02T19:14:50.676195700Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.676202300Z  + mkdir -p /sys/fs/cgroup/blkio//kubelet
	2022-06-02T19:14:50.678909900Z  + '[' /sys/fs/cgroup/blkio == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.678931300Z  + mount --bind /sys/fs/cgroup/blkio//kubelet /sys/fs/cgroup/blkio//kubelet
	2022-06-02T19:14:50.680760500Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/blkio
	2022-06-02T19:14:50.680783400Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.680793300Z  + local subsystem=/sys/fs/cgroup/blkio
	2022-06-02T19:14:50.680800400Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.680807500Z  + mkdir -p /sys/fs/cgroup/blkio//kubelet.slice
	2022-06-02T19:14:50.682970200Z  + '[' /sys/fs/cgroup/blkio == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.683058300Z  + mount --bind /sys/fs/cgroup/blkio//kubelet.slice /sys/fs/cgroup/blkio//kubelet.slice
	2022-06-02T19:14:50.686365900Z  + IFS=
	2022-06-02T19:14:50.686384200Z  + read -r subsystem
	2022-06-02T19:14:50.686389700Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/memory
	2022-06-02T19:14:50.686394300Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.686398700Z  + local subsystem=/sys/fs/cgroup/memory
	2022-06-02T19:14:50.686403200Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.686407600Z  + mkdir -p /sys/fs/cgroup/memory//kubelet
	2022-06-02T19:14:50.688478900Z  + '[' /sys/fs/cgroup/memory == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.688497000Z  + mount --bind /sys/fs/cgroup/memory//kubelet /sys/fs/cgroup/memory//kubelet
	2022-06-02T19:14:50.690775200Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/memory
	2022-06-02T19:14:50.690794700Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.690803900Z  + local subsystem=/sys/fs/cgroup/memory
	2022-06-02T19:14:50.690825100Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.690835400Z  + mkdir -p /sys/fs/cgroup/memory//kubelet.slice
	2022-06-02T19:14:50.692611800Z  + '[' /sys/fs/cgroup/memory == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.692618200Z  + mount --bind /sys/fs/cgroup/memory//kubelet.slice /sys/fs/cgroup/memory//kubelet.slice
	2022-06-02T19:14:50.694936900Z  + IFS=
	2022-06-02T19:14:50.694961900Z  + read -r subsystem
	2022-06-02T19:14:50.694975900Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/devices
	2022-06-02T19:14:50.695066200Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.695085300Z  + local subsystem=/sys/fs/cgroup/devices
	2022-06-02T19:14:50.695094900Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.695102700Z  + mkdir -p /sys/fs/cgroup/devices//kubelet
	2022-06-02T19:14:50.697617600Z  + '[' /sys/fs/cgroup/devices == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.697635000Z  + mount --bind /sys/fs/cgroup/devices//kubelet /sys/fs/cgroup/devices//kubelet
	2022-06-02T19:14:50.700338200Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/devices
	2022-06-02T19:14:50.700359800Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.700368400Z  + local subsystem=/sys/fs/cgroup/devices
	2022-06-02T19:14:50.700376000Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.700382700Z  + mkdir -p /sys/fs/cgroup/devices//kubelet.slice
	2022-06-02T19:14:50.702196600Z  + '[' /sys/fs/cgroup/devices == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.702221900Z  + mount --bind /sys/fs/cgroup/devices//kubelet.slice /sys/fs/cgroup/devices//kubelet.slice
	2022-06-02T19:14:50.704303600Z  + IFS=
	2022-06-02T19:14:50.704325300Z  + read -r subsystem
	2022-06-02T19:14:50.704333700Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/freezer
	2022-06-02T19:14:50.704338400Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.704342700Z  + local subsystem=/sys/fs/cgroup/freezer
	2022-06-02T19:14:50.704347200Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.704351500Z  + mkdir -p /sys/fs/cgroup/freezer//kubelet
	2022-06-02T19:14:50.706414000Z  + '[' /sys/fs/cgroup/freezer == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.706430400Z  + mount --bind /sys/fs/cgroup/freezer//kubelet /sys/fs/cgroup/freezer//kubelet
	2022-06-02T19:14:50.708873200Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/freezer
	2022-06-02T19:14:50.708905200Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.708910200Z  + local subsystem=/sys/fs/cgroup/freezer
	2022-06-02T19:14:50.708914700Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.708920700Z  + mkdir -p /sys/fs/cgroup/freezer//kubelet.slice
	2022-06-02T19:14:50.710904700Z  + '[' /sys/fs/cgroup/freezer == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.710930200Z  + mount --bind /sys/fs/cgroup/freezer//kubelet.slice /sys/fs/cgroup/freezer//kubelet.slice
	2022-06-02T19:14:50.713307000Z  + IFS=
	2022-06-02T19:14:50.713324500Z  + read -r subsystem
	2022-06-02T19:14:50.713330400Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/net_cls
	2022-06-02T19:14:50.713336900Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.713341500Z  + local subsystem=/sys/fs/cgroup/net_cls
	2022-06-02T19:14:50.713345900Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.713350400Z  + mkdir -p /sys/fs/cgroup/net_cls//kubelet
	2022-06-02T19:14:50.715809100Z  + '[' /sys/fs/cgroup/net_cls == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.715826000Z  + mount --bind /sys/fs/cgroup/net_cls//kubelet /sys/fs/cgroup/net_cls//kubelet
	2022-06-02T19:14:50.718136500Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/net_cls
	2022-06-02T19:14:50.718184100Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.718194800Z  + local subsystem=/sys/fs/cgroup/net_cls
	2022-06-02T19:14:50.718202200Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.718209100Z  + mkdir -p /sys/fs/cgroup/net_cls//kubelet.slice
	2022-06-02T19:14:50.720366500Z  + '[' /sys/fs/cgroup/net_cls == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.720377900Z  + mount --bind /sys/fs/cgroup/net_cls//kubelet.slice /sys/fs/cgroup/net_cls//kubelet.slice
	2022-06-02T19:14:50.723945100Z  + IFS=
	2022-06-02T19:14:50.723970900Z  + read -r subsystem
	2022-06-02T19:14:50.723982500Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/perf_event
	2022-06-02T19:14:50.723990300Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.723997900Z  + local subsystem=/sys/fs/cgroup/perf_event
	2022-06-02T19:14:50.724004800Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.724018800Z  + mkdir -p /sys/fs/cgroup/perf_event//kubelet
	2022-06-02T19:14:50.725699100Z  + '[' /sys/fs/cgroup/perf_event == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.725716500Z  + mount --bind /sys/fs/cgroup/perf_event//kubelet /sys/fs/cgroup/perf_event//kubelet
	2022-06-02T19:14:50.728610600Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/perf_event
	2022-06-02T19:14:50.728710800Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.728722000Z  + local subsystem=/sys/fs/cgroup/perf_event
	2022-06-02T19:14:50.728729500Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.728738200Z  + mkdir -p /sys/fs/cgroup/perf_event//kubelet.slice
	2022-06-02T19:14:50.730989800Z  + '[' /sys/fs/cgroup/perf_event == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.731016200Z  + mount --bind /sys/fs/cgroup/perf_event//kubelet.slice /sys/fs/cgroup/perf_event//kubelet.slice
	2022-06-02T19:14:50.733910600Z  + IFS=
	2022-06-02T19:14:50.733936800Z  + read -r subsystem
	2022-06-02T19:14:50.733948700Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/net_prio
	2022-06-02T19:14:50.734058400Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.734068200Z  + local subsystem=/sys/fs/cgroup/net_prio
	2022-06-02T19:14:50.734075200Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.734081800Z  + mkdir -p /sys/fs/cgroup/net_prio//kubelet
	2022-06-02T19:14:50.737271000Z  + '[' /sys/fs/cgroup/net_prio == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.737287600Z  + mount --bind /sys/fs/cgroup/net_prio//kubelet /sys/fs/cgroup/net_prio//kubelet
	2022-06-02T19:14:50.741574700Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/net_prio
	2022-06-02T19:14:50.741805700Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.741820400Z  + local subsystem=/sys/fs/cgroup/net_prio
	2022-06-02T19:14:50.741825100Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.741829500Z  + mkdir -p /sys/fs/cgroup/net_prio//kubelet.slice
	2022-06-02T19:14:50.743231700Z  + '[' /sys/fs/cgroup/net_prio == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.743250900Z  + mount --bind /sys/fs/cgroup/net_prio//kubelet.slice /sys/fs/cgroup/net_prio//kubelet.slice
	2022-06-02T19:14:50.746578800Z  + IFS=
	2022-06-02T19:14:50.746600100Z  + read -r subsystem
	2022-06-02T19:14:50.746623200Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/hugetlb
	2022-06-02T19:14:50.746629500Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.746635700Z  + local subsystem=/sys/fs/cgroup/hugetlb
	2022-06-02T19:14:50.746642100Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.746648400Z  + mkdir -p /sys/fs/cgroup/hugetlb//kubelet
	2022-06-02T19:14:50.748713000Z  + '[' /sys/fs/cgroup/hugetlb == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.748735300Z  + mount --bind /sys/fs/cgroup/hugetlb//kubelet /sys/fs/cgroup/hugetlb//kubelet
	2022-06-02T19:14:50.751380300Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/hugetlb
	2022-06-02T19:14:50.751402900Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.751411400Z  + local subsystem=/sys/fs/cgroup/hugetlb
	2022-06-02T19:14:50.751513300Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.751732100Z  + mkdir -p /sys/fs/cgroup/hugetlb//kubelet.slice
	2022-06-02T19:14:50.754310600Z  + '[' /sys/fs/cgroup/hugetlb == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.754326700Z  + mount --bind /sys/fs/cgroup/hugetlb//kubelet.slice /sys/fs/cgroup/hugetlb//kubelet.slice
	2022-06-02T19:14:50.756979200Z  + IFS=
	2022-06-02T19:14:50.757072600Z  + read -r subsystem
	2022-06-02T19:14:50.757092300Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/pids
	2022-06-02T19:14:50.757099700Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.757106800Z  + local subsystem=/sys/fs/cgroup/pids
	2022-06-02T19:14:50.757662700Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.757717700Z  + mkdir -p /sys/fs/cgroup/pids//kubelet
	2022-06-02T19:14:50.760184200Z  + '[' /sys/fs/cgroup/pids == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.760202300Z  + mount --bind /sys/fs/cgroup/pids//kubelet /sys/fs/cgroup/pids//kubelet
	2022-06-02T19:14:50.762758900Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/pids
	2022-06-02T19:14:50.762781300Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.762797800Z  + local subsystem=/sys/fs/cgroup/pids
	2022-06-02T19:14:50.762805400Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.762812400Z  + mkdir -p /sys/fs/cgroup/pids//kubelet.slice
	2022-06-02T19:14:50.765008100Z  + '[' /sys/fs/cgroup/pids == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.765033100Z  + mount --bind /sys/fs/cgroup/pids//kubelet.slice /sys/fs/cgroup/pids//kubelet.slice
	2022-06-02T19:14:50.768915600Z  + IFS=
	2022-06-02T19:14:50.768939900Z  + read -r subsystem
	2022-06-02T19:14:50.768953500Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/rdma
	2022-06-02T19:14:50.768961500Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.770134900Z  + local subsystem=/sys/fs/cgroup/rdma
	2022-06-02T19:14:50.770150800Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.770155700Z  + mkdir -p /sys/fs/cgroup/rdma//kubelet
	2022-06-02T19:14:50.772625100Z  + '[' /sys/fs/cgroup/rdma == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.772641200Z  + mount --bind /sys/fs/cgroup/rdma//kubelet /sys/fs/cgroup/rdma//kubelet
	2022-06-02T19:14:50.774988700Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/rdma
	2022-06-02T19:14:50.775014900Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.775023200Z  + local subsystem=/sys/fs/cgroup/rdma
	2022-06-02T19:14:50.775030600Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.775037800Z  + mkdir -p /sys/fs/cgroup/rdma//kubelet.slice
	2022-06-02T19:14:50.777036100Z  + '[' /sys/fs/cgroup/rdma == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.777058300Z  + mount --bind /sys/fs/cgroup/rdma//kubelet.slice /sys/fs/cgroup/rdma//kubelet.slice
	2022-06-02T19:14:50.780622600Z  + IFS=
	2022-06-02T19:14:50.780639700Z  + read -r subsystem
	2022-06-02T19:14:50.780644700Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/systemd
	2022-06-02T19:14:50.780649000Z  + local cgroup_root=/kubelet
	2022-06-02T19:14:50.780653200Z  + local subsystem=/sys/fs/cgroup/systemd
	2022-06-02T19:14:50.780657200Z  + '[' -z /kubelet ']'
	2022-06-02T19:14:50.780661200Z  + mkdir -p /sys/fs/cgroup/systemd//kubelet
	2022-06-02T19:14:50.782786400Z  + '[' /sys/fs/cgroup/systemd == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.782804800Z  + mount --bind /sys/fs/cgroup/systemd//kubelet /sys/fs/cgroup/systemd//kubelet
	2022-06-02T19:14:50.785767300Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/systemd
	2022-06-02T19:14:50.785803400Z  + local cgroup_root=/kubelet.slice
	2022-06-02T19:14:50.785811700Z  + local subsystem=/sys/fs/cgroup/systemd
	2022-06-02T19:14:50.785819100Z  + '[' -z /kubelet.slice ']'
	2022-06-02T19:14:50.785826400Z  + mkdir -p /sys/fs/cgroup/systemd//kubelet.slice
	2022-06-02T19:14:50.787815200Z  + '[' /sys/fs/cgroup/systemd == /sys/fs/cgroup/cpuset ']'
	2022-06-02T19:14:50.787831800Z  + mount --bind /sys/fs/cgroup/systemd//kubelet.slice /sys/fs/cgroup/systemd//kubelet.slice
	2022-06-02T19:14:50.790942900Z  + IFS=
	2022-06-02T19:14:50.790959200Z  + read -r subsystem
	2022-06-02T19:14:50.790964300Z  + return
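Each repeated block above is one call to mount_kubelet_cgroup_root, which prepares /kubelet and /kubelet.slice roots under every cgroup v1 subsystem and bind-mounts each directory onto itself so the kubelet can be handed that subtree. Reconstructed from the trace (the cpuset redirection targets and the driver variable name are not visible in a -x trace and are assumptions):

    mount_kubelet_cgroup_root() {
        local cgroup_root=$1
        local subsystem=$2
        [ -z "${cgroup_root}" ] && return 0
        mkdir -p "${subsystem}/${cgroup_root}"
        if [ "${subsystem}" == "/sys/fs/cgroup/cpuset" ]; then
            # a fresh cpuset cgroup is unusable until cpus/mems are seeded from the parent
            cat "${subsystem}/cpuset.cpus" > "${subsystem}/${cgroup_root}/cpuset.cpus"
            cat "${subsystem}/cpuset.mems" > "${subsystem}/${cgroup_root}/cpuset.mems"
        fi
        # self bind mount marks the subtree as a mount the kubelet may own
        mount --bind "${subsystem}/${cgroup_root}" "${subsystem}/${cgroup_root}"
    }

    # driven once per subsystem from the list echoed above
    while IFS= read -r subsystem; do
        mount_kubelet_cgroup_root /kubelet "${subsystem}"
        mount_kubelet_cgroup_root /kubelet.slice "${subsystem}"
    done <<< "${cgroup_subsystems}"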
	2022-06-02T19:14:50.790969000Z  + fix_machine_id
	2022-06-02T19:14:50.790975300Z  + echo 'INFO: clearing and regenerating /etc/machine-id'
	2022-06-02T19:14:50.791498800Z  INFO: clearing and regenerating /etc/machine-id
	2022-06-02T19:14:50.791517300Z  + rm -f /etc/machine-id
	2022-06-02T19:14:50.794022200Z  + systemd-machine-id-setup
	2022-06-02T19:14:50.801127500Z  Initializing machine ID from D-Bus machine ID.
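fix_machine_id, per the two commands traced above, discards the machine-id baked into the image and lets systemd mint a fresh one, so each container boots with a unique identity:

    rm -f /etc/machine-id
    systemd-machine-id-setup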
	2022-06-02T19:14:50.821163900Z  + fix_product_name
	2022-06-02T19:14:50.821183400Z  + [[ -f /sys/class/dmi/id/product_name ]]
	2022-06-02T19:14:50.821188900Z  + fix_product_uuid
	2022-06-02T19:14:50.821193600Z  + [[ ! -f /kind/product_uuid ]]
	2022-06-02T19:14:50.821200800Z  + cat /proc/sys/kernel/random/uuid
	2022-06-02T19:14:50.823503700Z  + [[ -f /sys/class/dmi/id/product_uuid ]]
	2022-06-02T19:14:50.823602700Z  + [[ -f /sys/devices/virtual/dmi/id/product_uuid ]]
	2022-06-02T19:14:50.823613400Z  + select_iptables
	2022-06-02T19:14:50.823618400Z  + local mode num_legacy_lines num_nft_lines
	2022-06-02T19:14:50.825450600Z  ++ grep -c '^-'
	2022-06-02T19:14:50.835300900Z  + num_legacy_lines=6
	2022-06-02T19:14:50.836987500Z  ++ grep -c '^-'
	2022-06-02T19:14:50.844414000Z  ++ true
	2022-06-02T19:14:50.844442100Z  + num_nft_lines=0
	2022-06-02T19:14:50.844453200Z  + '[' 6 -ge 0 ']'
	2022-06-02T19:14:50.844461000Z  + mode=legacy
	2022-06-02T19:14:50.844872000Z  + echo 'INFO: setting iptables to detected mode: legacy'
	2022-06-02T19:14:50.844896700Z  INFO: setting iptables to detected mode: legacy
	2022-06-02T19:14:50.844906200Z  + update-alternatives --set iptables /usr/sbin/iptables-legacy
	2022-06-02T19:14:50.844913700Z  + echo 'retryable update-alternatives: --set iptables /usr/sbin/iptables-legacy'
	2022-06-02T19:14:50.846306600Z  + local 'args=--set iptables /usr/sbin/iptables-legacy'
	2022-06-02T19:14:50.846326400Z  ++ seq 0 15
	2022-06-02T19:14:50.848247500Z  + for i in $(seq 0 15)
	2022-06-02T19:14:50.848270600Z  + /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-legacy
	2022-06-02T19:14:50.861315500Z  + return
	2022-06-02T19:14:50.862343300Z  + update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
	2022-06-02T19:14:50.862360700Z  + echo 'retryable update-alternatives: --set ip6tables /usr/sbin/ip6tables-legacy'
	2022-06-02T19:14:50.862514700Z  + local 'args=--set ip6tables /usr/sbin/ip6tables-legacy'
	2022-06-02T19:14:50.863719600Z  ++ seq 0 15
	2022-06-02T19:14:50.864845400Z  + for i in $(seq 0 15)
	2022-06-02T19:14:50.864868200Z  + /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
	2022-06-02T19:14:50.885144200Z  + return
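select_iptables chooses between the legacy and nft iptables backends by counting which one already has rules loaded, then repoints the Debian alternatives; update-alternatives is wrapped in a retry loop (the "seq 0 15" above) because it can fail transiently. A sketch, under the assumption that the two hidden "grep -c '^-'" pipelines read from iptables-legacy-save and iptables-nft-save as in the upstream kind entrypoint:

    num_legacy_lines=$(iptables-legacy-save 2>/dev/null | grep -c '^-' || true)
    num_nft_lines=$(iptables-nft-save 2>/dev/null | grep -c '^-' || true)
    # ties go to legacy, which is why "6 -ge 0" selects legacy above
    if [ "${num_legacy_lines}" -ge "${num_nft_lines}" ]; then
        mode=legacy
    else
        mode=nft
    fi
    echo "INFO: setting iptables to detected mode: ${mode}"
    update-alternatives --set iptables "/usr/sbin/iptables-${mode}"
    update-alternatives --set ip6tables "/usr/sbin/ip6tables-${mode}"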
	2022-06-02T19:14:50.885170500Z  + enable_network_magic
	2022-06-02T19:14:50.885180600Z  + local docker_embedded_dns_ip=127.0.0.11
	2022-06-02T19:14:50.885190500Z  + local docker_host_ip
	2022-06-02T19:14:50.887395500Z  ++ cut '-d ' -f1
	2022-06-02T19:14:50.887416900Z  ++ head -n1 /dev/fd/63
	2022-06-02T19:14:50.887426000Z  +++ getent ahostsv4 host.docker.internal
	2022-06-02T19:14:50.899108300Z  + docker_host_ip=192.168.65.2
	2022-06-02T19:14:50.899135900Z  + [[ -z 192.168.65.2 ]]
	2022-06-02T19:14:50.899148100Z  + [[ 192.168.65.2 =~ ^127\.[0-9]+\.[0-9]+\.[0-9]+$ ]]
	2022-06-02T19:14:50.899156500Z  + iptables-save
	2022-06-02T19:14:50.900104200Z  + iptables-restore
	2022-06-02T19:14:50.902605500Z  + sed -e 's/-d 127.0.0.11/-d 192.168.65.2/g' -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' -e 's/--to-source :53/--to-source 192.168.65.2:53/g'
	2022-06-02T19:14:50.907713700Z  + cp /etc/resolv.conf /etc/resolv.conf.original
	2022-06-02T19:14:50.910805200Z  + sed -e s/127.0.0.11/192.168.65.2/g /etc/resolv.conf.original
	2022-06-02T19:14:50.915844200Z  ++ cut '-d ' -f1
	2022-06-02T19:14:50.915868100Z  ++ head -n1 /dev/fd/63
	2022-06-02T19:14:50.916817200Z  ++++ hostname
	2022-06-02T19:14:50.918417300Z  +++ getent ahostsv4 kubernetes-upgrade-20220602191340-12108
	2022-06-02T19:14:50.922052900Z  + curr_ipv4=192.168.58.2
	2022-06-02T19:14:50.922075200Z  + echo 'INFO: Detected IPv4 address: 192.168.58.2'
	2022-06-02T19:14:50.922084900Z  INFO: Detected IPv4 address: 192.168.58.2
	2022-06-02T19:14:50.922092500Z  + '[' -f /kind/old-ipv4 ']'
	2022-06-02T19:14:50.922099700Z  + [[ -n 192.168.58.2 ]]
	2022-06-02T19:14:50.922113700Z  + echo -n 192.168.58.2
	2022-06-02T19:14:50.924330900Z  ++ cut '-d ' -f1
	2022-06-02T19:14:50.924353000Z  ++ head -n1 /dev/fd/63
	2022-06-02T19:14:50.925374800Z  ++++ hostname
	2022-06-02T19:14:50.926764900Z  +++ getent ahostsv6 kubernetes-upgrade-20220602191340-12108
	2022-06-02T19:14:50.930022200Z  + curr_ipv6=
	2022-06-02T19:14:50.930044200Z  + echo 'INFO: Detected IPv6 address: '
	2022-06-02T19:14:50.930067400Z  INFO: Detected IPv6 address: 
	2022-06-02T19:14:50.930075400Z  + '[' -f /kind/old-ipv6 ']'
	2022-06-02T19:14:50.930472500Z  + [[ -n '' ]]
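enable_network_magic, traced above, swaps Docker Desktop's embedded DNS (127.0.0.11) for the real host address (here 192.168.65.2, resolved from host.docker.internal) in both the NAT rules and resolv.conf, then records the node's current IPv4/IPv6 addresses. The DNS rewrite, reconstructed from the trace (the final resolv.conf redirection is assumed):

    docker_embedded_dns_ip=127.0.0.11
    docker_host_ip=$(getent ahostsv4 host.docker.internal | head -n1 | cut -d' ' -f1)

    # point the embedded-DNS NAT rules at the host, and mirror the OUTPUT
    # rules into PREROUTING so forwarded pod traffic is covered too
    iptables-save \
        | sed -e "s/-d ${docker_embedded_dns_ip}/-d ${docker_host_ip}/g" \
              -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' \
              -e "s/--to-source :53/--to-source ${docker_host_ip}:53/g" \
        | iptables-restore

    cp /etc/resolv.conf /etc/resolv.conf.original
    sed -e "s/${docker_embedded_dns_ip}/${docker_host_ip}/g" /etc/resolv.conf.original > /etc/resolv.conf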
	2022-06-02T19:14:50.931714600Z  ++ uname -a
	2022-06-02T19:14:50.933410300Z  + echo 'entrypoint completed: Linux kubernetes-upgrade-20220602191340-12108 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux'
	2022-06-02T19:14:50.933426400Z  entrypoint completed: Linux kubernetes-upgrade-20220602191340-12108 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	2022-06-02T19:14:50.933431900Z  + exec /sbin/init
	2022-06-02T19:14:50.944506800Z  systemd 245.4-4ubuntu3.17 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
	2022-06-02T19:14:50.944533100Z  Detected virtualization wsl.
	2022-06-02T19:14:50.944547400Z  Detected architecture x86-64.
	2022-06-02T19:14:50.944558600Z  Failed to create symlink /sys/fs/cgroup/cpuacct: File exists
	2022-06-02T19:14:50.944567000Z  Failed to create symlink /sys/fs/cgroup/cpu: File exists
	2022-06-02T19:14:50.944574500Z  Failed to create symlink /sys/fs/cgroup/net_cls: File exists
	2022-06-02T19:14:50.944581700Z  Failed to create symlink /sys/fs/cgroup/net_prio: File exists
	2022-06-02T19:14:50.944589100Z  
	2022-06-02T19:14:50.944735500Z  Welcome to Ubuntu 20.04.4 LTS!
	2022-06-02T19:14:50.944746300Z  
	2022-06-02T19:14:50.944753200Z  Set hostname to <kubernetes-upgrade-20220602191340-12108>.
	2022-06-02T19:14:51.009827900Z  [  OK  ] Started Dispatch Password …ts to Console Directory Watch.
	2022-06-02T19:14:51.009863700Z  [  OK  ] Set up automount Arbitrary…s File System Automount Point.
	2022-06-02T19:14:51.009874200Z  [  OK  ] Reached target Local Encrypted Volumes.
	2022-06-02T19:14:51.009907000Z  [  OK  ] Reached target Network is Online.
	2022-06-02T19:14:51.010355900Z  [  OK  ] Reached target Paths.
	2022-06-02T19:14:51.010380100Z  [  OK  ] Reached target Slices.
	2022-06-02T19:14:51.010391000Z  [  OK  ] Reached target Swap.
	2022-06-02T19:14:51.012438200Z  [  OK  ] Listening on Journal Socket (/dev/log).
	2022-06-02T19:14:51.012457400Z  [  OK  ] Listening on Journal Socket.
	2022-06-02T19:14:51.015334600Z           Mounting Huge Pages File System...
	2022-06-02T19:14:51.018129400Z           Mounting Kernel Debug File System...
	2022-06-02T19:14:51.020483400Z           Mounting Kernel Trace File System...
	2022-06-02T19:14:51.029572300Z           Starting Journal Service...
	2022-06-02T19:14:51.032604500Z           Mounting FUSE Control File System...
	2022-06-02T19:14:51.035745200Z           Starting Remount Root and Kernel File Systems...
	2022-06-02T19:14:51.039067700Z           Starting Apply Kernel Variables...
	2022-06-02T19:14:51.043204800Z  [  OK  ] Mounted Huge Pages File System.
	2022-06-02T19:14:51.043227200Z  [  OK  ] Mounted Kernel Debug File System.
	2022-06-02T19:14:51.043885500Z  [  OK  ] Mounted Kernel Trace File System.
	2022-06-02T19:14:51.043915500Z  [  OK  ] Mounted FUSE Control File System.
	2022-06-02T19:14:51.048860000Z  [  OK  ] Finished Remount Root and Kernel File Systems.
	2022-06-02T19:14:51.055112100Z           Starting Create System Users...
	2022-06-02T19:14:51.059600400Z           Starting Update UTMP about System Boot/Shutdown...
	2022-06-02T19:14:51.062223500Z  [  OK  ] Finished Apply Kernel Variables.
	2022-06-02T19:14:51.075082200Z  [  OK  ] Finished Update UTMP about System Boot/Shutdown.
	2022-06-02T19:14:51.090738300Z  [  OK  ] Started Journal Service.
	2022-06-02T19:14:51.093257300Z           Starting Flush Journal to Persistent Storage...
	2022-06-02T19:14:51.102388200Z  [  OK  ] Finished Flush Journal to Persistent Storage.
	2022-06-02T19:14:51.376486400Z  [  OK  ] Finished Create System Users.
	2022-06-02T19:14:51.378728400Z           Starting Create Static Device Nodes in /dev...
	2022-06-02T19:14:51.388995100Z  [  OK  ] Finished Create Static Device Nodes in /dev.
	2022-06-02T19:14:51.389027500Z  [  OK  ] Reached target Local File Systems (Pre).
	2022-06-02T19:14:51.389481500Z  [  OK  ] Reached target Local File Systems.
	2022-06-02T19:14:51.390031800Z  [  OK  ] Reached target System Initialization.
	2022-06-02T19:14:51.390911400Z  [  OK  ] Started Daily Cleanup of Temporary Directories.
	2022-06-02T19:14:51.390942900Z  [  OK  ] Reached target Timers.
	2022-06-02T19:14:51.391858200Z  [  OK  ] Listening on BuildKit.
	2022-06-02T19:14:51.392704400Z  [  OK  ] Listening on D-Bus System Message Bus Socket.
	2022-06-02T19:14:51.394666900Z           Starting Docker Socket for the API.
	2022-06-02T19:14:51.396946800Z           Starting Podman API Socket.
	2022-06-02T19:14:51.397910000Z  [  OK  ] Listening on Docker Socket for the API.
	2022-06-02T19:14:51.398877500Z  [  OK  ] Listening on Podman API Socket.
	2022-06-02T19:14:51.398893100Z  [  OK  ] Reached target Sockets.
	2022-06-02T19:14:51.399407100Z  [  OK  ] Reached target Basic System.
	2022-06-02T19:14:51.401570300Z           Starting containerd container runtime...
	2022-06-02T19:14:51.403719200Z  [  OK  ] Started D-Bus System Message Bus.
	2022-06-02T19:14:51.407847300Z           Starting minikube automount...
	2022-06-02T19:14:51.410746200Z           Starting OpenBSD Secure Shell server...
	2022-06-02T19:14:51.443531700Z  [  OK  ] Finished minikube automount.
	2022-06-02T19:14:51.460213600Z  [  OK  ] Started OpenBSD Secure Shell server.
	2022-06-02T19:14:51.591717200Z  [  OK  ] Started containerd container runtime.
	2022-06-02T19:14:51.595461100Z           Starting Docker Application Container Engine...
	2022-06-02T19:14:53.339215000Z  [  OK  ] Started Docker Application Container Engine.
	2022-06-02T19:14:53.339245700Z  [  OK  ] Reached target Multi-User System.
	2022-06-02T19:14:53.339254700Z  [  OK  ] Reached target Graphical Interface.
	2022-06-02T19:14:53.347127100Z           Starting Update UTMP about System Runlevel Changes...
	2022-06-02T19:14:53.359644700Z  [  OK  ] Finished Update UTMP about System Runlevel Changes.
	2022-06-02T19:16:24.822190300Z  [  OK  ] Stopped target Graphical Interface.
	2022-06-02T19:16:24.822251900Z  [  OK  ] Stopped target Multi-User System.
	2022-06-02T19:16:24.823206500Z  [  OK  ] Stopped target Timers.
	2022-06-02T19:16:24.823836800Z  [  OK  ] Stopped Daily Cleanup of Temporary Directories.
	2022-06-02T19:16:24.826973300Z           Stopping D-Bus System Message Bus...
	2022-06-02T19:16:24.834685900Z           Stopping Docker Application Container Engine...
	2022-06-02T19:16:24.836011600Z           Stopping kubelet: The Kubernetes Node Agent...
	2022-06-02T19:16:24.836939200Z           Stopping OpenBSD Secure Shell server...
	2022-06-02T19:16:24.839234900Z  [  OK  ] Stopped OpenBSD Secure Shell server.
	2022-06-02T19:16:24.840617100Z  [  OK  ] Stopped D-Bus System Message Bus.
	2022-06-02T19:16:25.341556400Z  [  OK  ] Stopped kubelet: The Kubernetes Node Agent.
	2022-06-02T19:16:25.734754900Z  [  OK  ] Unmounted /var/lib/docker/…2e2c93a17b7961968108c4/merged.
	2022-06-02T19:16:25.740739700Z  [  OK  ] Unmounted /var/lib/docker/…f898b5f9b7498ab0adb5f8/merged.
	2022-06-02T19:16:25.836856300Z  [  OK  ] Unmounted /var/lib/docker/…c8684f4e1d80552b2b3276/merged.
	2022-06-02T19:16:25.843254000Z  [  OK  ] Unmounted /var/lib/docker/…ef29200644eb2cd619bdc6/merged.
	2022-06-02T19:16:25.850125500Z  [  OK  ] Unmounted /var/lib/docker/…495a747af6b9836e25e9e2/merged.
	2022-06-02T19:16:25.963954300Z  [  OK  ] Unmounted /var/lib/docker/…d634a69f2a60e39e5f11c4/merged.
	2022-06-02T19:16:25.988466400Z  [  OK  ] Unmounted /var/lib/docker/…df4611af5681e635dc8f99/merged.
	2022-06-02T19:16:26.159166000Z  [  OK  ] Unmounted /var/lib/docker/…8d3fd2c2462da874b7/mounts/shm.
	2022-06-02T19:16:26.159207000Z  [  OK  ] Unmounted /var/lib/docker/…fae84ff7913edc61ad05cd/merged.
	2022-06-02T19:16:26.207494500Z  [  OK  ] Unmounted /var/lib/docker/…a14c3f8c389f5ef8be/mounts/shm.
	2022-06-02T19:16:26.207534800Z  [  OK  ] Unmounted /var/lib/docker/…bc17c55e1ffdb608672548/merged.
	2022-06-02T19:16:26.347488700Z  [  OK  ] Unmounted /var/lib/docker/…2b772ac061c280da1e/mounts/shm.
	2022-06-02T19:16:26.347533900Z  [  OK  ] Unmounted /var/lib/docker/…89417157aa9d8f260a266e/merged.
	2022-06-02T19:16:26.368581100Z  [  OK  ] Unmounted /var/lib/docker/…f294596a1b4f7be8bc/mounts/shm.
	2022-06-02T19:16:26.368608700Z  [  OK  ] Unmounted /var/lib/docker/…7d3af5931a17ce711d3b20/merged.
	2022-06-02T19:16:26.383733500Z  [  OK  ] Unmounted /var/lib/docker/…c0bb8965aa3ff6bf5f/mounts/shm.
	2022-06-02T19:16:26.383762400Z  [  OK  ] Unmounted /var/lib/docker/…9a63aa66332e0f57852400/merged.
	2022-06-02T19:16:26.412963900Z  [  OK  ] Unmounted /var/lib/docker/…d9a3f97b0bcc46df6c/mounts/shm.
	2022-06-02T19:16:26.412989500Z  [  OK  ] Unmounted /var/lib/docker/…96876f08ba21f943f34f3f/merged.
	2022-06-02T19:16:26.486873200Z  [  OK  ] Unmounted /run/docker/netns/8a340d127a4d.
	2022-06-02T19:16:26.494105400Z  [  OK  ] Unmounted /var/lib/docker/…0d980dec9f4635971f/mounts/shm.
	2022-06-02T19:16:26.495203300Z  [  OK  ] Unmounted /var/lib/docker/…32697e4c7392d1a760c334/merged.
	2022-06-02T19:16:26.585665600Z  [  OK  ] Stopped Docker Application Container Engine.
	2022-06-02T19:16:26.585973500Z  [  OK  ] Stopped target Network is Online.
	2022-06-02T19:16:26.586601400Z           Stopping containerd container runtime...
	2022-06-02T19:16:26.587950600Z  [  OK  ] Stopped minikube automount.
	2022-06-02T19:16:26.642568600Z  [  OK  ] Stopped containerd container runtime.
	2022-06-02T19:16:26.642608100Z  [  OK  ] Stopped target Basic System.
	2022-06-02T19:16:26.642719500Z  [  OK  ] Stopped target Paths.
	2022-06-02T19:16:26.643518000Z  [  OK  ] Stopped target Slices.
	2022-06-02T19:16:26.643539000Z  [  OK  ] Stopped target Sockets.
	2022-06-02T19:16:26.644927700Z  [  OK  ] Closed BuildKit.
	2022-06-02T19:16:26.646036500Z  [  OK  ] Closed D-Bus System Message Bus Socket.
	2022-06-02T19:16:26.651945000Z  [  OK  ] Closed Docker Socket for the API.
	2022-06-02T19:16:26.651964700Z  [  OK  ] Closed Podman API Socket.
	2022-06-02T19:16:26.651971000Z  [  OK  ] Stopped target System Initialization.
	2022-06-02T19:16:26.653065500Z  [  OK  ] Stopped target Local Encrypted Volumes.
	2022-06-02T19:16:26.687091500Z  [  OK  ] Stopped Dispatch Password …ts to Console Directory Watch.
	2022-06-02T19:16:26.687115000Z  [  OK  ] Stopped target Local File Systems.
	2022-06-02T19:16:26.710610700Z           Unmounting /data...
	2022-06-02T19:16:26.711818200Z           Unmounting /etc/hostname...
	2022-06-02T19:16:26.713167000Z           Unmounting /etc/hosts...
	2022-06-02T19:16:26.714604100Z           Unmounting /etc/resolv.conf...
	2022-06-02T19:16:26.717223100Z           Unmounting /run/docker/netns/default...
	2022-06-02T19:16:26.718580800Z           Unmounting /tmp/hostpath-provisioner...
	2022-06-02T19:16:26.719786400Z           Unmounting /tmp/hostpath_pv...
	2022-06-02T19:16:26.721087600Z           Unmounting /usr/lib/modules...
	2022-06-02T19:16:26.724233400Z           Unmounting /var/lib/kubele…~secret/coredns-token-l4wmk...
	2022-06-02T19:16:26.727578600Z           Unmounting /var/lib/kubele…age-provisioner-token-v7xx6...
	2022-06-02T19:16:26.734048300Z           Unmounting /var/lib/kubele…cret/kube-proxy-token-r99sx...
	2022-06-02T19:16:26.735376700Z  [  OK  ] Stopped Apply Kernel Variables.
	2022-06-02T19:16:26.737010100Z           Stopping Update UTMP about System Boot/Shutdown...
	2022-06-02T19:16:26.751045100Z  [  OK  ] Stopped Update UTMP about System Boot/Shutdown.
	2022-06-02T19:16:26.799533300Z  [  OK  ] Unmounted /data.
	2022-06-02T19:16:26.801420100Z  [  OK  ] Unmounted /etc/hostname.
	2022-06-02T19:16:26.802132000Z  [  OK  ] Unmounted /etc/hosts.
	2022-06-02T19:16:26.803605600Z  [  OK  ] Unmounted /etc/resolv.conf.
	2022-06-02T19:16:26.804719500Z  [  OK  ] Unmounted /run/docker/netns/default.
	2022-06-02T19:16:26.806346300Z  [  OK  ] Unmounted /tmp/hostpath-provisioner.
	2022-06-02T19:16:26.807718200Z  [  OK  ] Unmounted /tmp/hostpath_pv.
	2022-06-02T19:16:26.808987000Z  [  OK  ] Unmounted /usr/lib/modules.
	2022-06-02T19:16:26.810282500Z  [  OK  ] Unmounted /var/lib/kubelet…io~secret/coredns-token-l4wmk.
	2022-06-02T19:16:26.811790000Z  [  OK  ] Unmounted /var/lib/kubelet…orage-provisioner-token-v7xx6.
	2022-06-02T19:16:26.813034300Z  [  OK  ] Unmounted /var/lib/kubelet…secret/kube-proxy-token-r99sx.
	2022-06-02T19:16:26.814500000Z           Unmounting /tmp...
	2022-06-02T19:16:26.815719400Z           Unmounting /var...
	2022-06-02T19:16:26.821601900Z  [  OK  ] Unmounted /tmp.
	2022-06-02T19:16:26.822235000Z  [  OK  ] Unmounted /var.
	2022-06-02T19:16:26.822251900Z  [  OK  ] Stopped target Local File Systems (Pre).
	2022-06-02T19:16:26.823093800Z  [  OK  ] Stopped target Swap.
	2022-06-02T19:16:26.823111200Z  [  OK  ] Reached target Unmount All Filesystems.
	2022-06-02T19:16:26.824257000Z  [  OK  ] Stopped Create Static Device Nodes in /dev.
	2022-06-02T19:16:26.825777700Z  [  OK  ] Stopped Create System Users.
	2022-06-02T19:16:26.826958800Z  [  OK  ] Stopped Remount Root and Kernel File Systems.
	2022-06-02T19:16:26.826976600Z  [  OK  ] Reached target Shutdown.
	2022-06-02T19:16:26.826982600Z  [  OK  ] Reached target Final Step.
	2022-06-02T19:16:26.827880500Z  [  OK  ] Finished Power-Off.
	2022-06-02T19:16:26.827897700Z  [  OK  ] Reached target Power-Off.
	
	-- /stdout --
	I0602 19:17:11.858404     936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:17:14.008332     936 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1499184s)
	I0602 19:17:14.008332     936 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:68 OomKillDisable:true NGoroutines:65 SystemTime:2022-06-02 19:17:12.9492766 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:17:14.008332     936 errors.go:98] postmortem docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:68 OomKillDisable:true NGoroutines:65 SystemTime:2022-06-02 19:17:12.9492766 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:17:14.016994     936 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220602191340-12108] to gather additional debugging logs...
	I0602 19:17:14.016994     936 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220602191340-12108
	W0602 19:17:15.158335     936 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220602191340-12108 returned with exit code 1
	I0602 19:17:15.158335     936 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220602191340-12108: (1.1413352s)
	I0602 19:17:15.158335     936 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220602191340-12108]: docker network inspect kubernetes-upgrade-20220602191340-12108: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220602191340-12108
	I0602 19:17:15.158335     936 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220602191340-12108]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220602191340-12108
	
	** /stderr **
	I0602 19:17:15.167317     936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:17:17.353506     936 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.186037s)
	I0602 19:17:17.353986     936 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:74 OomKillDisable:true NGoroutines:72 SystemTime:2022-06-02 19:17:16.2824507 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:17:17.367293     936 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220602191340-12108
	I0602 19:17:18.486372     936 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220602191340-12108: (1.1190742s)
	I0602 19:17:18.486372     936 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-20220602191340-12108\config.json ...
	I0602 19:17:18.489371     936 machine.go:88] provisioning docker machine ...
	I0602 19:17:18.489371     936 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220602191340-12108"
	I0602 19:17:18.496356     936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108
	W0602 19:17:19.638070     936 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108 returned with exit code 1
	I0602 19:17:19.638070     936 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108: (1.1417088s)
	I0602 19:17:19.638070     936 machine.go:91] provisioned docker machine in 1.1486943s
	I0602 19:17:19.647074     936 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 19:17:19.654055     936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108
	W0602 19:17:20.830977     936 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108 returned with exit code 1
	I0602 19:17:20.830977     936 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108: (1.1769161s)
	I0602 19:17:20.830977     936 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 19:17:21.073636     936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108
	W0602 19:17:22.228828     936 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108 returned with exit code 1
	I0602 19:17:22.228828     936 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108: (1.1551863s)
	W0602 19:17:22.228828     936 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0602 19:17:22.228828     936 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 19:17:22.239814     936 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 19:17:22.247784     936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108
	W0602 19:17:23.398840     936 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108 returned with exit code 1
	I0602 19:17:23.398840     936 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108: (1.1510506s)
	I0602 19:17:23.398840     936 retry.go:31] will retry after 141.409254ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 19:17:23.563301     936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108
	W0602 19:17:24.726795     936 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108 returned with exit code 1
	I0602 19:17:24.726795     936 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220602191340-12108: (1.1634888s)
	W0602 19:17:24.726795     936 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0602 19:17:24.726795     936 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 19:17:24.726795     936 fix.go:57] fixHost completed within 18.5150343s
	I0602 19:17:24.726795     936 start.go:81] releasing machines lock for "kubernetes-upgrade-20220602191340-12108", held for 18.515273s
	W0602 19:17:24.727741     936 out.go:239] * Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20220602191340-12108" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	* Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20220602191340-12108" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 19:17:25.186947     936 out.go:177] 
	W0602 19:17:25.597024     936 out.go:239] X Exiting due to GUEST_PROVISION_CONTAINER_EXITED: Docker container exited prematurely after it was created, consider investigating Docker's performance/health.
	X Exiting due to GUEST_PROVISION_CONTAINER_EXITED: Docker container exited prematurely after it was created, consider investigating Docker's performance/health.
	I0602 19:17:25.693418     936 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:252: failed to upgrade with newest k8s version. args: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220602191340-12108 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker : exit status 80
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220602191340-12108 version --output=json
version_upgrade_test.go:255: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-20220602191340-12108 version --output=json: exit status 1 (186.6271ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-20220602191340-12108" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:257: error running kubectl: exit status 1
panic.go:482: *** TestKubernetesUpgrade FAILED at 2022-06-02 19:17:26.9177284 +0000 GMT m=+7522.993672301
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20220602191340-12108
helpers_test.go:231: (dbg) Done: docker inspect kubernetes-upgrade-20220602191340-12108: (1.1897759s)
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-20220602191340-12108:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024",
	        "Created": "2022-06-02T19:14:47.8422731Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "network 322f4233d6e4b7b1eb6c0630668710dda5d7f72c8791e2d7e090bf7c854cb1c3 not found",
	            "StartedAt": "2022-06-02T19:14:50.2110413Z",
	            "FinishedAt": "2022-06-02T19:16:27.1435289Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024/hostname",
	        "HostsPath": "/var/lib/docker/containers/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024/hosts",
	        "LogPath": "/var/lib/docker/containers/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024/e425b5d64014fc112e2c6213cca7749ecbd803cdfd50c69d90cb4a22f6676024-json.log",
	        "Name": "/kubernetes-upgrade-20220602191340-12108",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-20220602191340-12108:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20220602191340-12108",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dd7f480705ee377d00cab0a1c3708bcaa5fd0a0169bf492baa593faa194ce02c-init/diff:/var/lib/docker/overlay2/dfce970b43800856c522d9750e5e1364e8adf4be4cf71ca7c53d79b33355f5a7/diff:/var/lib/docker/overlay2/4fd23a1b84854239f1bb855d05e42ecd6acbd1b0944b347813a56f5f45356a42/diff:/var/lib/docker/overlay2/864c5b1fbc297750771bb843fdeb4bafa10868a71716f4a01f1119609fb34667/diff:/var/lib/docker/overlay2/0f11f6855118857c743b90ca120ff7aa550f8157d475abf59df950433a5bc6e8/diff:/var/lib/docker/overlay2/2ae7f559725a060dc3b3a9c2fbd554b98114ae47dbf8db75f13bd8a95cbae19a/diff:/var/lib/docker/overlay2/48f41ac288d1037223ac101e6bc07f05729cdcecd98cc85971db99e90765c437/diff:/var/lib/docker/overlay2/8d4eaae639ade3ad3459b4fb67dbcac83774b72a2550b0a4bca1f21d122b20e6/diff:/var/lib/docker/overlay2/e06515bb91756221300de52336376d32ef9bd8685a92352e522936c4947b88ee/diff:/var/lib/docker/overlay2/a2f615fb794b704dc3823080c47e2c357cf4826ec91f6ae190c7497bb18a80cd/diff:/var/lib/docker/overlay2/22f99f
8a3da21c6e2be4c5c5e9d969af73e7695aaf9b0c7d0d09b5795ba76416/diff:/var/lib/docker/overlay2/9c0266785c64b9f6c471863067ca9db045a5aa61167a7817217cf01825a7d868/diff:/var/lib/docker/overlay2/b8a0250c9ae7d899ee3e46414c2db7f7ba363793900f8fcbf1b470586ebe7bd9/diff:/var/lib/docker/overlay2/00afbeac619cb9c06d4da311f5fc5aa3f5147b88b291acf06d4c4b36984ad5a2/diff:/var/lib/docker/overlay2/da51241ed08bd861b9d27902198eae13c3e4aac5c79f522e9f3fa209ea35e8d3/diff:/var/lib/docker/overlay2/b01176f7dbe98e3004db7c0fe45d94616a803dd8ae9cbdf3a1f2a188604178af/diff:/var/lib/docker/overlay2/0ebb0ff0177c8116e72a14ac704b161f75922cea05fe804ad1f7b83f4cd3dd70/diff:/var/lib/docker/overlay2/bae8d175bc3e334a70aaa239643efa0e8b453ab163f077d9cef60e3840c717ba/diff:/var/lib/docker/overlay2/e72a79f763a44dc32f9a2e84dc5e28a060e7fbb9f4624cb8aaa084dd356522ec/diff:/var/lib/docker/overlay2/2e1bc304b205033ad7f49fb8db243b0991596e0eec913fd13e8382aa25767e21/diff:/var/lib/docker/overlay2/ebb9b39dedfc09f9f34ea879f56a8ffd24ab9f9bf8acc93aa9df5eb93dba58e8/diff:/var/lib/d
ocker/overlay2/bffdca36eba4bce9086f2c269bcfe5b915d807483717f0e27acbd51b5bbfc11b/diff:/var/lib/docker/overlay2/96c321cbf06c0050c8a0a7897e9533db1ee5788eb09b1e1d605bdd1134af8eca/diff:/var/lib/docker/overlay2/735422b44af98e330209fe1c4273bf57aa33fcfd770f3e9d6f1a6e59f7545920/diff:/var/lib/docker/overlay2/8dc177c0589f67ded7d9c229d3c587fe77b3d1c68cf0a5af871bc23768d67d84/diff:/var/lib/docker/overlay2/9a29541ccfee3849e0691950c599bb7e4e51d9026724b1ad13abc8d8e9c140e0/diff:/var/lib/docker/overlay2/50fe1dc8f357b5d624681e6f14d98e6d33a8b6b53d70293ba90ac4435a1e18d8/diff:/var/lib/docker/overlay2/86f301a296dbb7422a3d55a008a9f38278a7a19d68a0f735d298c0c2a431ee30/diff:/var/lib/docker/overlay2/dc8087ea592587f8cb5392cc0ee739c33f2724c47b83767d593b3065914820b0/diff:/var/lib/docker/overlay2/15163601889f0d414f35ccd64ae33a52958605b5b7e50618ed5d4f4bd06ec65b/diff:/var/lib/docker/overlay2/a50cf19d9d69b9c68c6c66a918cbde678b49e8d566d06772af22bf99191b08f3/diff:/var/lib/docker/overlay2/621f3b0fc578721c5d0465771ad007f022ed238fa5a2076f807c077680c
26d27/diff:/var/lib/docker/overlay2/2652f9ffde92786a77e3bb35fe07c03a623aaad541f0ca9710839800c4b470e4/diff:/var/lib/docker/overlay2/c853755ee76ea55ad6c00f5eaff82196f4953ee6fb2d27e27ba35f86d56bfc32/diff:/var/lib/docker/overlay2/a0f70e6416a8e618ea7475b5e7f4cdc9a66ac39f0a6c1969c569d8e4f0b5e9eb/diff:/var/lib/docker/overlay2/275d2c643ecb011298df16e0794bebb9a7ec82e190aea53a90369288c521f75e/diff:/var/lib/docker/overlay2/a7e78f238badc23c2c38b7e9b9c4428c0614e825744076161295740d46a20957/diff:/var/lib/docker/overlay2/39fcd4c392271449973511a31d445289c1f8d378d01759fef12c430c9f44f2b8/diff:/var/lib/docker/overlay2/e1c51360d327e86575fe8248415fae12e9dbdde580db0e6f4f4e485ac9f92e3b/diff:/var/lib/docker/overlay2/fecd88783858177cbe3b751f0717b370c5556d7cf0ef163e2710f16fce09d53c/diff:/var/lib/docker/overlay2/3b4c7afaac6f5818bc33bec8c0ec442eb5a1010d0de6fe488460ee83a3901b21/diff:/var/lib/docker/overlay2/47d0047bc42c34ea02c33c1500f96c5109f27f84f973a5636832bbc855761e3f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dd7f480705ee377d00cab0a1c3708bcaa5fd0a0169bf492baa593faa194ce02c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dd7f480705ee377d00cab0a1c3708bcaa5fd0a0169bf492baa593faa194ce02c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dd7f480705ee377d00cab0a1c3708bcaa5fd0a0169bf492baa593faa194ce02c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20220602191340-12108",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20220602191340-12108/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20220602191340-12108",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20220602191340-12108",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20220602191340-12108",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7ed50d5ab7698ef031105a3fedaf6e5918caf31e28958792e67e2c629831ecb0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/7ed50d5ab769",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20220602191340-12108": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e425b5d64014",
	                        "kubernetes-upgrade-20220602191340-12108"
	                    ],
	                    "NetworkID": "322f4233d6e4b7b1eb6c0630668710dda5d7f72c8791e2d7e090bf7c854cb1c3",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
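
The State block in the inspect output above carries the core evidence: the container finished with Status "exited", ExitCode 130, and an Error reporting that network 322f4233d6e4b7b1eb6c0630668710dda5d7f72c8791e2d7e090bf7c854cb1c3 was not found, i.e. the bridge network the container was attached to had been removed out from under it. A minimal Go sketch for pulling exactly those fields out of docker inspect output (the container name is the one from this run; the Docker CLI is assumed to be on PATH):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// container models only the State fields this post-mortem cares about.
type container struct {
	State struct {
		Status   string
		ExitCode int
		Error    string
	}
}

func main() {
	// docker inspect prints a JSON array, one element per named object.
	out, err := exec.Command("docker", "inspect",
		"kubernetes-upgrade-20220602191340-12108").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil || len(cs) == 0 {
		fmt.Println("unexpected inspect output:", err)
		return
	}
	s := cs[0].State
	// Against the run above this would print:
	// exited 130 network 322f4233d6e4b7b1eb6c0630668710dda5d7f72c8791e2d7e090bf7c854cb1c3 not found
	fmt.Printf("%s %d %s\n", s.Status, s.ExitCode, s.Error)
}
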
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-20220602191340-12108 -n kubernetes-upgrade-20220602191340-12108
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-20220602191340-12108 -n kubernetes-upgrade-20220602191340-12108: exit status 7 (3.4276488s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-20220602191340-12108" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220602191340-12108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20220602191340-12108
E0602 19:17:41.000915   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20220602191340-12108: (23.1944481s)
--- FAIL: TestKubernetesUpgrade (254.19s)
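
The proximate cause is visible in the retry loop above: minikube repeatedly runs docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" to discover the published SSH port, and the command exits 1 because a stopped container publishes no ports ("Ports": {} in the inspect dump). A minimal Go sketch of that probe, assuming only that the Docker CLI is on PATH (the container name is the one from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort runs the same Go-template inspect that cli_runner logs above.
// On a stopped container .NetworkSettings.Ports is empty, so the nested
// index calls fail, the template errors out, and docker exits non-zero.
func sshHostPort(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		name).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w: %s", name, err, strings.TrimSpace(string(out)))
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("kubernetes-upgrade-20220602191340-12108")
	if err != nil {
		fmt.Println("cannot resolve SSH port:", err)
		return
	}
	fmt.Println("sshd published on host port", port)
}
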

                                                
                                    
TestNoKubernetes/serial/Start (42.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220602190816-12108 --no-kubernetes --driver=docker
E0602 19:12:40.997560   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220602190816-12108 --no-kubernetes --driver=docker: exit status 1 (36.5042533s)

                                                
                                                
-- stdout --
	* [NoKubernetes-20220602190816-12108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14269
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting minikube without Kubernetes NoKubernetes-20220602190816-12108 in cluster NoKubernetes-20220602190816-12108
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=16300MB) ...

                                                
                                                
-- /stdout --
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-20220602190816-12108 --no-kubernetes --driver=docker" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20220602190816-12108
helpers_test.go:231: (dbg) Done: docker inspect NoKubernetes-20220602190816-12108: (1.4569024s)
helpers_test.go:235: (dbg) docker inspect NoKubernetes-20220602190816-12108:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "NoKubernetes-20220602190816-12108",
	        "Id": "34dfa9a5bb61bf8a05caff2ce5ffe45f7a95bf026ff04ad9eb389fe36cff2a52",
	        "Created": "2022-06-02T19:13:05.2110609Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220602190816-12108 -n NoKubernetes-20220602190816-12108
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220602190816-12108 -n NoKubernetes-20220602190816-12108: exit status 7 (4.2668082s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0602 19:13:22.482618   10128 status.go:247] status error: host: state: unknown state "NoKubernetes-20220602190816-12108": docker container inspect NoKubernetes-20220602190816-12108 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220602190816-12108

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-20220602190816-12108" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/Start (42.25s)
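
The "Nonexistent" status comes from the probe shown in the stderr block above: docker container inspect NoKubernetes-20220602190816-12108 --format={{.State.Status}} exits 1 with "Error: No such container" because the start aborted before the container existed (only its network was created, per the docker inspect output). A minimal Go sketch of the same check, assuming the Docker CLI is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState mirrors the status probe from the log above. When the
// container was never created, docker exits 1 ("No such container") and
// the state is reported the way minikube does here: "Nonexistent".
func containerState(name string) string {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "Nonexistent"
	}
	return strings.TrimSpace(string(out))
}

func main() {
	fmt.Println(containerState("NoKubernetes-20220602190816-12108"))
}
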

                                                
                                    
TestNetworkPlugins/group/calico/Start (988.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-20220602191616-12108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker
E0602 19:36:57.286808   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 19:37:09.904822   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p calico-20220602191616-12108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker: exit status 80 (16m28.3022812s)

                                                
                                                
-- stdout --
	* [calico-20220602191616-12108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14269
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node calico-20220602191616-12108 in cluster calico-20220602191616-12108
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "calico-20220602191616-12108" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0602 19:36:51.386784   12568 out.go:296] Setting OutFile to fd 1684 ...
	I0602 19:36:51.443770   12568 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 19:36:51.443770   12568 out.go:309] Setting ErrFile to fd 1884...
	I0602 19:36:51.443770   12568 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 19:36:51.468000   12568 out.go:303] Setting JSON to false
	I0602 19:36:51.474173   12568 start.go:115] hostinfo: {"hostname":"minikube7","uptime":61753,"bootTime":1654136858,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0602 19:36:51.474173   12568 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 19:36:51.482135   12568 out.go:177] * [calico-20220602191616-12108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0602 19:36:51.486185   12568 notify.go:193] Checking for updates...
	I0602 19:36:51.488934   12568 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0602 19:36:51.491721   12568 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0602 19:36:51.493777   12568 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 19:36:51.496394   12568 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 19:36:51.499982   12568 config.go:178] Loaded profile config "default-k8s-different-port-20220602192441-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:36:51.500977   12568 config.go:178] Loaded profile config "newest-cni-20220602193528-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:36:51.500977   12568 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 19:36:54.498003   12568 docker.go:137] docker version: linux-20.10.16
	I0602 19:36:54.509434   12568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:36:56.672284   12568 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.16284s)
	I0602 19:36:56.673229   12568 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:57 SystemTime:2022-06-02 19:36:55.6057635 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:36:56.680708   12568 out.go:177] * Using the docker driver based on user configuration
	I0602 19:36:56.684920   12568 start.go:284] selected driver: docker
	I0602 19:36:56.684920   12568 start.go:806] validating driver "docker" against <nil>
	I0602 19:36:56.684920   12568 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 19:36:56.947955   12568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:36:59.021668   12568 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0735384s)
	I0602 19:36:59.021668   12568 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:49 OomKillDisable:true NGoroutines:50 SystemTime:2022-06-02 19:36:57.9975685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:36:59.021668   12568 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0602 19:36:59.022387   12568 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0602 19:36:59.026273   12568 out.go:177] * Using Docker Desktop driver with the root privilege
	I0602 19:36:59.028707   12568 cni.go:95] Creating CNI manager for "calico"
	I0602 19:36:59.028707   12568 start_flags.go:301] Found "Calico" CNI - setting NetworkPlugin=cni
	I0602 19:36:59.028707   12568 start_flags.go:306] config:
	{Name:calico-20220602191616-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220602191616-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 19:36:59.033184   12568 out.go:177] * Starting control plane node calico-20220602191616-12108 in cluster calico-20220602191616-12108
	I0602 19:36:59.036537   12568 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 19:36:59.039868   12568 out.go:177] * Pulling base image ...
	I0602 19:36:59.042129   12568 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 19:36:59.042129   12568 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 19:36:59.042314   12568 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 19:36:59.042314   12568 cache.go:57] Caching tarball of preloaded images
	I0602 19:36:59.042908   12568 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 19:36:59.042908   12568 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0602 19:36:59.042908   12568 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\config.json ...
	I0602 19:36:59.043611   12568 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\config.json: {Name:mkbd4539c0399f26ceb27f0bb0ed4fb674dc59e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:37:00.136652   12568 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 19:37:00.136652   12568 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 19:37:00.136652   12568 cache.go:206] Successfully downloaded all kic artifacts
	I0602 19:37:00.136652   12568 start.go:352] acquiring machines lock for calico-20220602191616-12108: {Name:mkc77f98bc165e3d366b80ce1be2c2a0584e0dad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 19:37:00.136652   12568 start.go:356] acquired machines lock for "calico-20220602191616-12108" in 0s
	I0602 19:37:00.136652   12568 start.go:91] Provisioning new machine with config: &{Name:calico-20220602191616-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220602191616-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 19:37:00.136652   12568 start.go:131] createHost starting for "" (driver="docker")
	I0602 19:37:00.141664   12568 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0602 19:37:00.142654   12568 start.go:165] libmachine.API.Create for "calico-20220602191616-12108" (driver="docker")
	I0602 19:37:00.142654   12568 client.go:168] LocalClient.Create starting
	I0602 19:37:00.142654   12568 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0602 19:37:00.142654   12568 main.go:134] libmachine: Decoding PEM data...
	I0602 19:37:00.142654   12568 main.go:134] libmachine: Parsing certificate...
	I0602 19:37:00.142654   12568 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0602 19:37:00.143653   12568 main.go:134] libmachine: Decoding PEM data...
	I0602 19:37:00.143653   12568 main.go:134] libmachine: Parsing certificate...
	I0602 19:37:00.152658   12568 cli_runner.go:164] Run: docker network inspect calico-20220602191616-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0602 19:37:01.215612   12568 cli_runner.go:211] docker network inspect calico-20220602191616-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0602 19:37:01.215612   12568 cli_runner.go:217] Completed: docker network inspect calico-20220602191616-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0629493s)
	I0602 19:37:01.223614   12568 network_create.go:272] running [docker network inspect calico-20220602191616-12108] to gather additional debugging logs...
	I0602 19:37:01.223614   12568 cli_runner.go:164] Run: docker network inspect calico-20220602191616-12108
	W0602 19:37:02.315077   12568 cli_runner.go:211] docker network inspect calico-20220602191616-12108 returned with exit code 1
	I0602 19:37:02.315077   12568 cli_runner.go:217] Completed: docker network inspect calico-20220602191616-12108: (1.0914585s)
	I0602 19:37:02.315077   12568 network_create.go:275] error running [docker network inspect calico-20220602191616-12108]: docker network inspect calico-20220602191616-12108: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220602191616-12108
	I0602 19:37:02.315077   12568 network_create.go:277] output of [docker network inspect calico-20220602191616-12108]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220602191616-12108
	
	** /stderr **
	I0602 19:37:02.322076   12568 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 19:37:09.701758   12568 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (7.3795688s)
	I0602 19:37:09.725581   12568 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0009863e8] misses:0}
	I0602 19:37:09.725581   12568 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 19:37:09.725581   12568 network_create.go:115] attempt to create docker network calico-20220602191616-12108 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0602 19:37:09.733951   12568 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220602191616-12108
	I0602 19:37:10.926440   12568 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220602191616-12108: (1.1924838s)
	I0602 19:37:10.926440   12568 network_create.go:99] docker network calico-20220602191616-12108 192.168.49.0/24 created
	I0602 19:37:10.926440   12568 kic.go:106] calculated static IP "192.168.49.2" for the "calico-20220602191616-12108" container
	I0602 19:37:10.942827   12568 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0602 19:37:12.029192   12568 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.086361s)
	I0602 19:37:12.037201   12568 cli_runner.go:164] Run: docker volume create calico-20220602191616-12108 --label name.minikube.sigs.k8s.io=calico-20220602191616-12108 --label created_by.minikube.sigs.k8s.io=true
	I0602 19:37:13.212191   12568 cli_runner.go:217] Completed: docker volume create calico-20220602191616-12108 --label name.minikube.sigs.k8s.io=calico-20220602191616-12108 --label created_by.minikube.sigs.k8s.io=true: (1.1747423s)
	I0602 19:37:13.212191   12568 oci.go:103] Successfully created a docker volume calico-20220602191616-12108
	I0602 19:37:13.223402   12568 cli_runner.go:164] Run: docker run --rm --name calico-20220602191616-12108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220602191616-12108 --entrypoint /usr/bin/test -v calico-20220602191616-12108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib
	I0602 19:37:15.986961   12568 cli_runner.go:217] Completed: docker run --rm --name calico-20220602191616-12108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220602191616-12108 --entrypoint /usr/bin/test -v calico-20220602191616-12108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib: (2.7632915s)
	I0602 19:37:15.987150   12568 oci.go:107] Successfully prepared a docker volume calico-20220602191616-12108
	I0602 19:37:15.987286   12568 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 19:37:15.987364   12568 kic.go:179] Starting extracting preloaded images to volume ...
	I0602 19:37:15.997056   12568 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220602191616-12108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir
	I0602 19:38:01.817510   12568 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220602191616-12108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir: (45.8201079s)
	I0602 19:38:01.817622   12568 kic.go:188] duration metric: took 45.830040 seconds to extract preloaded images to volume
	I0602 19:38:01.841214   12568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:38:03.991931   12568 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1507072s)
	I0602 19:38:03.992308   12568 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:51 OomKillDisable:true NGoroutines:63 SystemTime:2022-06-02 19:38:02.9395884 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:38:04.000440   12568 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0602 19:38:06.192941   12568 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.1924911s)
	I0602 19:38:06.200962   12568 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220602191616-12108 --name calico-20220602191616-12108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220602191616-12108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220602191616-12108 --network calico-20220602191616-12108 --ip 192.168.49.2 --volume calico-20220602191616-12108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496
	W0602 19:38:09.049411   12568 cli_runner.go:211] docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220602191616-12108 --name calico-20220602191616-12108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220602191616-12108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220602191616-12108 --network calico-20220602191616-12108 --ip 192.168.49.2 --volume calico-20220602191616-12108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 returned with exit code 125
	I0602 19:38:09.049411   12568 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220602191616-12108 --name calico-20220602191616-12108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220602191616-12108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220602191616-12108 --network calico-20220602191616-12108 --ip 192.168.49.2 --volume calico-20220602191616-12108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496: (2.8484362s)
	I0602 19:38:09.049411   12568 client.go:171] LocalClient.Create took 1m8.9064517s
	I0602 19:38:11.074251   12568 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 19:38:11.082603   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	W0602 19:38:12.150013   12568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108 returned with exit code 1
	I0602 19:38:12.150013   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.0674054s)
	I0602 19:38:12.150013   12568 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 19:38:12.439568   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	W0602 19:38:13.523732   12568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108 returned with exit code 1
	I0602 19:38:13.523732   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.0841591s)
	W0602 19:38:13.523732   12568 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0602 19:38:13.523732   12568 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 19:38:13.532728   12568 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 19:38:13.539732   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	W0602 19:38:14.662083   12568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108 returned with exit code 1
	I0602 19:38:14.662083   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.1222052s)
	I0602 19:38:14.662083   12568 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 19:38:14.969196   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	W0602 19:38:16.052648   12568 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108 returned with exit code 1
	I0602 19:38:16.052648   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.0834479s)
	W0602 19:38:16.052648   12568 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0602 19:38:16.052648   12568 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
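The retry.go lines above show the probe pattern at work here: run the inspect, and on failure retry once after a sub-second delay before surfacing the error. A minimal sketch of that shape, assuming a hypothetical retryAfter helper (not the real retry.go API):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryAfter runs fn up to attempts times, sleeping delay between
    // failed attempts, and returns the last error. Simplified stand-in
    // for the behavior logged by retry.go above.
    func retryAfter(attempts int, delay time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            if i < attempts-1 {
                fmt.Printf("will retry after %v: %v\n", delay, err)
                time.Sleep(delay)
            }
        }
        return err
    }

    func main() {
        err := retryAfter(2, 276*time.Millisecond, func() error {
            return errors.New("unable to inspect a not running container to get SSH port")
        })
        fmt.Println("giving up:", err)
    }

With the container never reaching a running state, both attempts fail and the df probes above are abandoned.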
	I0602 19:38:16.052648   12568 start.go:134] duration metric: createHost completed in 1m15.9146559s
	I0602 19:38:16.052648   12568 start.go:81] releasing machines lock for "calico-20220602191616-12108", held for 1m15.9156606s
	W0602 19:38:16.052648   12568 start.go:599] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220602191616-12108 --name calico-20220602191616-12108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220602191616-12108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220602191616-12108 --network calico-20220602191616-12108 --ip 192.168.49.2 --volume calico-20220602191616-12108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496: exit status 125
	stdout:
	3e1a55f004363eff64915f6088e2b87337053664fd80735b76c1ab19c2928df8
	
	stderr:
	docker: Error response from daemon: network calico-20220602191616-12108 not found.
	I0602 19:38:16.071214   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:38:17.126055   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.0548364s)
	W0602 19:38:17.126055   12568 start.go:604] delete host: Docker machine "calico-20220602191616-12108" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0602 19:38:17.126055   12568 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220602191616-12108 --name calico-20220602191616-12108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220602191616-12108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220602191616-12108 --network calico-20220602191616-12108 --ip 192.168.49.2 --volume calico-20220602191616-12108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496: exit status 125
	stdout:
	3e1a55f004363eff64915f6088e2b87337053664fd80735b76c1ab19c2928df8
	
	stderr:
	docker: Error response from daemon: network calico-20220602191616-12108 not found.
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220602191616-12108 --name calico-20220602191616-12108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220602191616-12108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220602191616-12108 --network calico-20220602191616-12108 --ip 192.168.49.2 --volume calico-20220602191616-12108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496: exit status 125
	stdout:
	3e1a55f004363eff64915f6088e2b87337053664fd80735b76c1ab19c2928df8
	
	stderr:
	docker: Error response from daemon: network calico-20220602191616-12108 not found.
	
	I0602 19:38:17.126055   12568 start.go:614] Will try again in 5 seconds ...
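The underlying failure is worth noting: docker network create succeeded at 19:37:10, yet the docker run at 19:38:09 failed with "network calico-20220602191616-12108 not found", i.e. the freshly created network was gone by the time the container tried to attach to it, apparently a flake on this Docker Desktop/WSL2 host. A minimal sketch that replays the create-then-run sequence to probe for the same race (assumes a local docker CLI and the busybox image; the network name is a throwaway):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Create a bridge network, immediately attach a container to it,
        // then clean up. On a healthy daemon the run succeeds; under the
        // race seen in the log it fails with "network ... not found".
        net := "flake-repro-net"
        steps := [][]string{
            {"network", "create", "--driver=bridge", net},
            {"run", "--rm", "--network", net, "busybox", "true"},
            {"network", "rm", net},
        }
        for _, args := range steps {
            out, err := exec.Command("docker", args...).CombinedOutput()
            fmt.Printf("docker %v -> err=%v\n%s", args, err, out)
        }
    }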
	I0602 19:38:22.127872   12568 start.go:352] acquiring machines lock for calico-20220602191616-12108: {Name:mkc77f98bc165e3d366b80ce1be2c2a0584e0dad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 19:38:22.128553   12568 start.go:356] acquired machines lock for "calico-20220602191616-12108" in 188.1µs
	I0602 19:38:22.128760   12568 start.go:94] Skipping create...Using existing machine configuration
	I0602 19:38:22.128796   12568 fix.go:55] fixHost starting: 
	I0602 19:38:22.145042   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:38:23.278994   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1339473s)
	I0602 19:38:23.278994   12568 fix.go:103] recreateIfNeeded on calico-20220602191616-12108: state= err=<nil>
	I0602 19:38:23.278994   12568 fix.go:108] machineExists: false. err=machine does not exist
	I0602 19:38:23.283030   12568 out.go:177] * docker "calico-20220602191616-12108" container is missing, will recreate.
	I0602 19:38:23.285016   12568 delete.go:124] DEMOLISHING calico-20220602191616-12108 ...
	I0602 19:38:23.300427   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:38:24.413878   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1134458s)
	I0602 19:38:24.413878   12568 stop.go:79] host is in state 
	I0602 19:38:24.413878   12568 main.go:134] libmachine: Stopping "calico-20220602191616-12108"...
	I0602 19:38:24.426887   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:38:25.521050   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.0941575s)
	I0602 19:38:25.536066   12568 kic_runner.go:93] Run: systemctl --version
	I0602 19:38:25.536066   12568 kic_runner.go:114] Args: [docker exec --privileged calico-20220602191616-12108 systemctl --version]
	I0602 19:38:26.647973   12568 kic_runner.go:93] Run: sudo service kubelet stop
	I0602 19:38:26.647973   12568 kic_runner.go:114] Args: [docker exec --privileged calico-20220602191616-12108 sudo service kubelet stop]
	I0602 19:38:27.810243   12568 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 3e1a55f004363eff64915f6088e2b87337053664fd80735b76c1ab19c2928df8 is not running
	
	** /stderr **
	W0602 19:38:27.810288   12568 kic.go:439] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 3e1a55f004363eff64915f6088e2b87337053664fd80735b76c1ab19c2928df8 is not running
	I0602 19:38:27.831061   12568 kic_runner.go:93] Run: sudo service kubelet stop
	I0602 19:38:27.831061   12568 kic_runner.go:114] Args: [docker exec --privileged calico-20220602191616-12108 sudo service kubelet stop]
	I0602 19:38:29.058302   12568 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 3e1a55f004363eff64915f6088e2b87337053664fd80735b76c1ab19c2928df8 is not running
	
	** /stderr **
	W0602 19:38:29.058302   12568 kic.go:441] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 3e1a55f004363eff64915f6088e2b87337053664fd80735b76c1ab19c2928df8 is not running
	I0602 19:38:29.079286   12568 kic_runner.go:93] Run: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0602 19:38:29.080317   12568 kic_runner.go:114] Args: [docker exec --privileged calico-20220602191616-12108 docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}]
	I0602 19:38:30.140739   12568 kic.go:452] unable to list containers: docker: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 3e1a55f004363eff64915f6088e2b87337053664fd80735b76c1ab19c2928df8 is not running
	I0602 19:38:30.140739   12568 kic.go:462] successfully stopped kubernetes!
	I0602 19:38:30.156592   12568 kic_runner.go:93] Run: pgrep kube-apiserver
	I0602 19:38:30.156592   12568 kic_runner.go:114] Args: [docker exec --privileged calico-20220602191616-12108 pgrep kube-apiserver]
	I0602 19:38:32.356424   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:38:33.395106   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.0383003s)
	I0602 19:38:36.425908   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:38:37.468293   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.0417938s)
	I0602 19:38:40.492526   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:38:41.563337   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.0707754s)
	I0602 19:38:44.586406   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:38:45.745598   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1591001s)
	I0602 19:38:48.761626   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:38:49.859223   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.0975925s)
	I0602 19:38:52.880874   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:38:53.920696   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.0396994s)
	I0602 19:38:56.945317   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:38:58.050783   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1054618s)
	I0602 19:39:01.080349   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:39:02.158947   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.0785933s)
	I0602 19:39:05.186779   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:39:06.269528   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.0827452s)
	I0602 19:39:09.284290   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:39:10.410093   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1257974s)
	I0602 19:39:13.441133   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:39:14.565585   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1244466s)
	I0602 19:39:17.582430   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:39:18.663640   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.0811034s)
	I0602 19:39:21.693760   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:39:22.927385   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.2333426s)
	I0602 19:39:25.957072   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:39:27.087511   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1304346s)
	I0602 19:39:30.102347   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:39:31.222805   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1204526s)
	I0602 19:39:34.250816   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:39:35.382183   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1303509s)
	I0602 19:39:38.400488   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:39:39.575558   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1750652s)
	I0602 19:39:42.598026   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:39:43.692573   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.0943152s)
	I0602 19:39:46.716707   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:39:47.853983   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1371457s)
	I0602 19:39:50.872260   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:39:52.265846   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.3935807s)
	I0602 19:39:55.297367   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:39:56.683023   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.3856509s)
	I0602 19:39:59.714197   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:40:01.022210   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.308007s)
	I0602 19:40:04.043469   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:40:05.213121   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1696472s)
	I0602 19:40:08.233372   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:40:09.419959   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1865825s)
	I0602 19:40:12.449604   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:40:13.596543   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.146776s)
	I0602 19:40:16.625148   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:40:17.829770   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.2046165s)
	I0602 19:40:20.868384   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:40:22.013779   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1453449s)
	I0602 19:40:25.048384   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:40:26.198901   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1504474s)
	I0602 19:40:29.215828   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:40:30.385934   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1691245s)
	I0602 19:40:33.410135   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:40:34.552001   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1418617s)
	I0602 19:40:37.583182   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:40:38.769417   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1862297s)
	I0602 19:40:41.792875   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:40:42.936081   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1432006s)
	I0602 19:40:45.960941   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:40:47.122437   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1613635s)
	I0602 19:40:50.156472   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:40:51.336504   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1800274s)
	I0602 19:40:54.370689   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:40:55.457996   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.086728s)
	I0602 19:40:58.482366   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:40:59.585787   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1034167s)
	I0602 19:41:02.615622   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:41:03.696390   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.0806227s)
	I0602 19:41:06.717293   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:41:07.791160   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.0738629s)
	I0602 19:41:10.812915   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:41:11.935498   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1223717s)
	I0602 19:41:14.962220   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:41:16.082637   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1204121s)
	I0602 19:41:19.112153   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:41:20.245702   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1335444s)
	I0602 19:41:23.273043   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:41:24.402000   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1289523s)
	I0602 19:41:27.433230   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:41:28.592948   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1595358s)
	I0602 19:41:31.613881   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:41:32.722144   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1079766s)
	I0602 19:41:35.742026   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:41:36.920968   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1789366s)
	I0602 19:41:39.945886   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:41:41.168014   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.2208864s)
	I0602 19:41:44.191863   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:41:45.293038   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.10117s)
	I0602 19:41:48.310154   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:41:49.451138   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1409787s)
	I0602 19:41:52.473971   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:41:53.648344   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1743673s)
	I0602 19:41:56.672804   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:41:57.816207   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1433972s)
	I0602 19:42:00.835417   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:42:01.957380   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1219581s)
	I0602 19:42:04.984300   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:42:06.086290   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.101985s)
	I0602 19:42:09.109312   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:42:10.275461   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1658513s)
	I0602 19:42:13.302717   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:42:14.468822   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1660998s)
	I0602 19:42:17.502010   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:42:18.743477   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.241461s)
	I0602 19:42:21.771274   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:42:22.955014   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1837355s)
	I0602 19:42:25.973126   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:42:27.073831   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1005764s)
	I0602 19:42:30.102240   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:42:31.292998   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1897949s)
	I0602 19:42:34.310164   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:42:35.510712   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.2005436s)
	I0602 19:42:38.550486   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:42:39.685727   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1349901s)
	I0602 19:42:42.686660   12568 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0602 19:42:42.686660   12568 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
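The four-minute wall of identical inspect calls above is a stop poll: read {{.State.Status}} roughly every four seconds, up to 60 attempts, then give up, which is exactly the "Maximum number of retries (60) exceeded" outcome logged here. A sketch of that capped polling loop, with illustrative names rather than minikube's:

    package main

    import (
        "fmt"
        "time"
    )

    // waitStopped polls status() until it reports "exited" or the retry
    // cap is hit. Mirrors the capped polling visible in the log; the
    // names are illustrative, not minikube's.
    func waitStopped(status func() string, maxRetries int, interval time.Duration) error {
        for i := 0; i < maxRetries; i++ {
            if status() == "exited" {
                return nil
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("Maximum number of retries (%d) exceeded", maxRetries)
    }

    func main() {
        // The container in the log never transitioned, so status stays
        // empty; a short interval is used here (the log's cadence is ~4s).
        err := waitStopped(func() string { return "" }, 60, 10*time.Millisecond)
        fmt.Println("stop err:", err)
    }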
	I0602 19:42:42.703997   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:42:43.792412   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.088411s)
	W0602 19:42:43.792412   12568 delete.go:135] deletehost failed: Docker machine "calico-20220602191616-12108" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0602 19:42:43.806397   12568 cli_runner.go:164] Run: docker container inspect -f {{.Id}} calico-20220602191616-12108
	I0602 19:42:44.908991   12568 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} calico-20220602191616-12108: (1.1023829s)
	I0602 19:42:44.917590   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:42:46.071248   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.153653s)
	I0602 19:42:46.079269   12568 cli_runner.go:164] Run: docker exec --privileged -t calico-20220602191616-12108 /bin/bash -c "sudo init 0"
	W0602 19:42:47.217458   12568 cli_runner.go:211] docker exec --privileged -t calico-20220602191616-12108 /bin/bash -c "sudo init 0" returned with exit code 1
	I0602 19:42:47.217518   12568 cli_runner.go:217] Completed: docker exec --privileged -t calico-20220602191616-12108 /bin/bash -c "sudo init 0": (1.1380666s)
	I0602 19:42:47.217518   12568 oci.go:625] error shutdown calico-20220602191616-12108: docker exec --privileged -t calico-20220602191616-12108 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 3e1a55f004363eff64915f6088e2b87337053664fd80735b76c1ab19c2928df8 is not running
	I0602 19:42:48.239210   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:42:49.395037   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.1558222s)
	I0602 19:42:49.395037   12568 oci.go:639] temporary error: container calico-20220602191616-12108 status is  but expect it to be exited
	I0602 19:42:49.395037   12568 oci.go:645] Successfully shutdown container calico-20220602191616-12108
	I0602 19:42:49.402030   12568 cli_runner.go:164] Run: docker rm -f -v calico-20220602191616-12108
	I0602 19:42:50.573773   12568 cli_runner.go:217] Completed: docker rm -f -v calico-20220602191616-12108: (1.1717376s)
	I0602 19:42:50.579806   12568 cli_runner.go:164] Run: docker container inspect -f {{.Id}} calico-20220602191616-12108
	W0602 19:42:51.714688   12568 cli_runner.go:211] docker container inspect -f {{.Id}} calico-20220602191616-12108 returned with exit code 1
	I0602 19:42:51.714688   12568 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} calico-20220602191616-12108: (1.1348767s)
	I0602 19:42:51.720692   12568 cli_runner.go:164] Run: docker network inspect calico-20220602191616-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0602 19:42:52.905439   12568 cli_runner.go:211] docker network inspect calico-20220602191616-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0602 19:42:52.905439   12568 cli_runner.go:217] Completed: docker network inspect calico-20220602191616-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1847422s)
	I0602 19:42:52.912450   12568 network_create.go:272] running [docker network inspect calico-20220602191616-12108] to gather additional debugging logs...
	I0602 19:42:52.912450   12568 cli_runner.go:164] Run: docker network inspect calico-20220602191616-12108
	W0602 19:42:54.051619   12568 cli_runner.go:211] docker network inspect calico-20220602191616-12108 returned with exit code 1
	I0602 19:42:54.051619   12568 cli_runner.go:217] Completed: docker network inspect calico-20220602191616-12108: (1.1391643s)
	I0602 19:42:54.051619   12568 network_create.go:275] error running [docker network inspect calico-20220602191616-12108]: docker network inspect calico-20220602191616-12108: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220602191616-12108
	I0602 19:42:54.051619   12568 network_create.go:277] output of [docker network inspect calico-20220602191616-12108]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220602191616-12108
	
	** /stderr **
	W0602 19:42:54.052617   12568 delete.go:139] delete failed (probably ok) <nil>
	I0602 19:42:54.052617   12568 fix.go:115] Sleeping 1 second for extra luck!
	I0602 19:42:55.065913   12568 start.go:131] createHost starting for "" (driver="docker")
	I0602 19:42:55.069913   12568 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0602 19:42:55.070904   12568 start.go:165] libmachine.API.Create for "calico-20220602191616-12108" (driver="docker")
	I0602 19:42:55.070904   12568 client.go:168] LocalClient.Create starting
	I0602 19:42:55.070904   12568 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0602 19:42:55.070904   12568 main.go:134] libmachine: Decoding PEM data...
	I0602 19:42:55.070904   12568 main.go:134] libmachine: Parsing certificate...
	I0602 19:42:55.070904   12568 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0602 19:42:55.071917   12568 main.go:134] libmachine: Decoding PEM data...
	I0602 19:42:55.071917   12568 main.go:134] libmachine: Parsing certificate...
	I0602 19:42:55.081932   12568 cli_runner.go:164] Run: docker network inspect calico-20220602191616-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0602 19:42:56.252432   12568 cli_runner.go:211] docker network inspect calico-20220602191616-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0602 19:42:56.252432   12568 cli_runner.go:217] Completed: docker network inspect calico-20220602191616-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1695158s)
	I0602 19:42:56.259424   12568 network_create.go:272] running [docker network inspect calico-20220602191616-12108] to gather additional debugging logs...
	I0602 19:42:56.259424   12568 cli_runner.go:164] Run: docker network inspect calico-20220602191616-12108
	W0602 19:42:57.399303   12568 cli_runner.go:211] docker network inspect calico-20220602191616-12108 returned with exit code 1
	I0602 19:42:57.399303   12568 cli_runner.go:217] Completed: docker network inspect calico-20220602191616-12108: (1.139744s)
	I0602 19:42:57.399303   12568 network_create.go:275] error running [docker network inspect calico-20220602191616-12108]: docker network inspect calico-20220602191616-12108: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220602191616-12108
	I0602 19:42:57.399303   12568 network_create.go:277] output of [docker network inspect calico-20220602191616-12108]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220602191616-12108
	
	** /stderr **
	I0602 19:42:57.408398   12568 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 19:42:58.531495   12568 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.123046s)
	I0602 19:42:58.549490   12568 network.go:284] reusing subnet 192.168.49.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0009863e8] amended:false}} dirty:map[] misses:0}
	I0602 19:42:58.550504   12568 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 19:42:58.550504   12568 network_create.go:115] attempt to create docker network calico-20220602191616-12108 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0602 19:42:58.557499   12568 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220602191616-12108
	W0602 19:42:59.667000   12568 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220602191616-12108 returned with exit code 1
	I0602 19:42:59.667000   12568 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220602191616-12108: (1.1094962s)
	W0602 19:42:59.667000   12568 network_create.go:107] failed to create docker network calico-20220602191616-12108 192.168.49.0/24, will retry: subnet is taken
	I0602 19:42:59.683005   12568 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0009863e8] amended:false}} dirty:map[] misses:0}
	I0602 19:42:59.683005   12568 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 19:42:59.700633   12568 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0009863e8] amended:true}} dirty:map[192.168.49.0:0xc0009863e8 192.168.58.0:0xc000986540] misses:0}
	I0602 19:42:59.700633   12568 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 19:42:59.700633   12568 network_create.go:115] attempt to create docker network calico-20220602191616-12108 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0602 19:42:59.709315   12568 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220602191616-12108
	W0602 19:43:00.785412   12568 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220602191616-12108 returned with exit code 1
	I0602 19:43:00.785412   12568 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220602191616-12108: (1.0760507s)
	W0602 19:43:00.785412   12568 network_create.go:107] failed to create docker network calico-20220602191616-12108 192.168.58.0/24, will retry: subnet is taken
	I0602 19:43:00.801387   12568 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0009863e8] amended:true}} dirty:map[192.168.49.0:0xc0009863e8 192.168.58.0:0xc000986540] misses:1}
	I0602 19:43:00.801387   12568 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 19:43:00.816382   12568 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0009863e8] amended:true}} dirty:map[192.168.49.0:0xc0009863e8 192.168.58.0:0xc000986540 192.168.67.0:0xc000986648] misses:1}
	I0602 19:43:00.816382   12568 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 19:43:00.816382   12568 network_create.go:115] attempt to create docker network calico-20220602191616-12108 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0602 19:43:00.823383   12568 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220602191616-12108
	I0602 19:43:02.071973   12568 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220602191616-12108: (1.248584s)
	I0602 19:43:02.071973   12568 network_create.go:99] docker network calico-20220602191616-12108 192.168.67.0/24 created
	I0602 19:43:02.071973   12568 kic.go:106] calculated static IP "192.168.67.2" for the "calico-20220602191616-12108" container
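
The retry loop above is the subnet walk: the third octet steps by 9 (49, 58, 67, ...), each candidate is reserved in-process for one minute so concurrent profile creations cannot grab it, and any range an existing docker network already owns is skipped until docker network create succeeds; the node then takes the first client address in the winning range (gateway .1, node .2). A minimal Go sketch of that walk, with a hypothetical taken callback standing in for both the reservation map and the docker probe:

    package main

    import "fmt"

    // pickSubnet mirrors the walk in the log: 192.168.49.0/24 and
    // 192.168.58.0/24 are taken, so 192.168.67.0/24 wins and the node
    // is later assigned 192.168.67.2.
    func pickSubnet(taken func(cidr string) bool) (cidr, gateway, nodeIP string, err error) {
    	for octet := 49; octet <= 247; octet += 9 {
    		cidr = fmt.Sprintf("192.168.%d.0/24", octet)
    		if taken(cidr) {
    			continue // "subnet is taken, will retry" with the next candidate
    		}
    		return cidr, fmt.Sprintf("192.168.%d.1", octet), fmt.Sprintf("192.168.%d.2", octet), nil
    	}
    	return "", "", "", fmt.Errorf("no free 192.168.x.0/24 subnet")
    }

    func main() {
    	used := map[string]bool{"192.168.49.0/24": true, "192.168.58.0/24": true}
    	fmt.Println(pickSubnet(func(c string) bool { return used[c] }))
    }
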
	I0602 19:43:02.088849   12568 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0602 19:43:03.236294   12568 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1474399s)
	I0602 19:43:03.245301   12568 cli_runner.go:164] Run: docker volume create calico-20220602191616-12108 --label name.minikube.sigs.k8s.io=calico-20220602191616-12108 --label created_by.minikube.sigs.k8s.io=true
	I0602 19:43:04.394243   12568 cli_runner.go:217] Completed: docker volume create calico-20220602191616-12108 --label name.minikube.sigs.k8s.io=calico-20220602191616-12108 --label created_by.minikube.sigs.k8s.io=true: (1.148937s)
	I0602 19:43:04.394243   12568 oci.go:103] Successfully created a docker volume calico-20220602191616-12108
	I0602 19:43:04.401239   12568 cli_runner.go:164] Run: docker run --rm --name calico-20220602191616-12108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220602191616-12108 --entrypoint /usr/bin/test -v calico-20220602191616-12108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib
	I0602 19:43:06.997621   12568 cli_runner.go:217] Completed: docker run --rm --name calico-20220602191616-12108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220602191616-12108 --entrypoint /usr/bin/test -v calico-20220602191616-12108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib: (2.5963705s)
	I0602 19:43:06.997621   12568 oci.go:107] Successfully prepared a docker volume calico-20220602191616-12108
	I0602 19:43:06.997621   12568 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 19:43:06.997621   12568 kic.go:179] Starting extracting preloaded images to volume ...
	I0602 19:43:07.005610   12568 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220602191616-12108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir
	I0602 19:43:41.000540   12568 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220602191616-12108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir: (33.9946574s)
	I0602 19:43:41.000540   12568 kic.go:188] duration metric: took 34.002773 seconds to extract preloaded images to volume
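
The ~34 second step above unpacks the cached image tarball into the profile's named volume: the lz4 archive is bind-mounted read-only into a throwaway kicbase container whose entrypoint is tar. A sketch of driving the same command from Go with os/exec (paths and image taken from the log; this is a stand-in, not minikube's cli_runner itself):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // extractPreload shells out to docker the way the log does: the tarball is
    // mounted read-only at /preloaded.tar and unpacked into the volume that
    // later becomes the node's /var.
    func extractPreload(tarball, volume, image string) error {
    	start := time.Now()
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("extract failed: %v: %s", err, out)
    	}
    	fmt.Printf("took %s to extract preloaded images to volume\n", time.Since(start))
    	return nil
    }

    func main() {
    	err := extractPreload(
    		`C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4`,
    		"calico-20220602191616-12108",
    		"gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252")
    	if err != nil {
    		fmt.Println(err)
    	}
    }
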
	I0602 19:43:41.013269   12568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:43:43.528006   12568 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.514727s)
	I0602 19:43:43.528006   12568 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:73 OomKillDisable:true NGoroutines:60 SystemTime:2022-06-02 19:43:42.2403552 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:43:43.546788   12568 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0602 19:43:45.915664   12568 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.3688651s)
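
Before creating the node container, minikube reads the daemon's security options; the seccomp=unconfined and apparmor=unconfined flags in the docker run below depend on what the daemon reports. A sketch of that query and the flag decision (the mapping here is an assumption inferred from the flags visible in the next command, not the driver's exact decision table):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // securityOptFlags asks the daemon for its security options; the quotes in
    // the log's format string mean the output arrives wrapped in single quotes.
    func securityOptFlags() ([]string, error) {
    	out, err := exec.Command("docker", "info", "--format", "'{{json .SecurityOptions}}'").Output()
    	if err != nil {
    		return nil, err
    	}
    	var opts []string
    	raw := strings.Trim(strings.TrimSpace(string(out)), "'")
    	if err := json.Unmarshal([]byte(raw), &opts); err != nil {
    		return nil, err
    	}
    	var flags []string
    	for _, o := range opts {
    		if strings.HasPrefix(o, "name=seccomp") {
    			flags = append(flags, "--security-opt", "seccomp=unconfined")
    		}
    		if strings.HasPrefix(o, "name=apparmor") {
    			flags = append(flags, "--security-opt", "apparmor=unconfined")
    		}
    	}
    	return flags, nil
    }

    func main() {
    	fmt.Println(securityOptFlags())
    }
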
	I0602 19:43:45.922679   12568 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220602191616-12108 --name calico-20220602191616-12108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220602191616-12108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220602191616-12108 --network calico-20220602191616-12108 --ip 192.168.67.2 --volume calico-20220602191616-12108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496
	I0602 19:43:48.664156   12568 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220602191616-12108 --name calico-20220602191616-12108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220602191616-12108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220602191616-12108 --network calico-20220602191616-12108 --ip 192.168.67.2 --volume calico-20220602191616-12108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496: (2.7414658s)
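
Every container port in the docker run above is published to 127.0.0.1 with an ephemeral host port (--publish=127.0.0.1::22 and friends), which is why nearly every later step begins with a docker container inspect for the mapped port; on this run SSH resolves to host port 54867. A sketch of that lookup:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostPort resolves the ephemeral 127.0.0.1 port docker picked for a
    // published container port, using the same inspect template as the log.
    func hostPort(container, port string) (string, error) {
    	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s/tcp") 0).HostPort}}`, port)
    	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	fmt.Println(hostPort("calico-20220602191616-12108", "22")) // 54867 on this run
    }
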
	I0602 19:43:48.676153   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Running}}
	I0602 19:43:50.178665   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Running}}: (1.5025056s)
	I0602 19:43:50.188678   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:43:51.490468   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.301784s)
	I0602 19:43:51.490468   12568 cli_runner.go:164] Run: docker exec calico-20220602191616-12108 stat /var/lib/dpkg/alternatives/iptables
	I0602 19:43:52.966586   12568 cli_runner.go:217] Completed: docker exec calico-20220602191616-12108 stat /var/lib/dpkg/alternatives/iptables: (1.4761118s)
	I0602 19:43:52.966586   12568 oci.go:247] the created container "calico-20220602191616-12108" has a running status.
	I0602 19:43:52.966586   12568 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220602191616-12108\id_rsa...
	I0602 19:43:53.264550   12568 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220602191616-12108\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0602 19:43:54.688904   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:43:55.976313   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.2874033s)
	I0602 19:43:56.000331   12568 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0602 19:43:56.000331   12568 kic_runner.go:114] Args: [docker exec --privileged calico-20220602191616-12108 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0602 19:43:57.405792   12568 kic_runner.go:123] Done: [docker exec --privileged calico-20220602191616-12108 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.4054551s)
	I0602 19:43:57.411771   12568 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220602191616-12108\id_rsa...
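
Key provisioning is three steps in the log: generate an RSA pair on the Windows host, install the public half as /home/docker/.ssh/authorized_keys inside the container, and chown it with a privileged exec. A hedged sketch of the same sequence shelling out to ssh-keygen and docker (a hypothetical helper, not kic_runner's actual code path):

    package main

    import (
    	"bytes"
    	"os"
    	"os/exec"
    )

    // provisionSSHKey: ssh-keygen on the host, authorized_keys written inside
    // the container, then a privileged chown so the docker user owns it.
    func provisionSSHKey(keyPath, container string) error {
    	if err := exec.Command("ssh-keygen", "-t", "rsa", "-N", "", "-f", keyPath).Run(); err != nil {
    		return err
    	}
    	pub, err := os.ReadFile(keyPath + ".pub")
    	if err != nil {
    		return err
    	}
    	write := exec.Command("docker", "exec", "-i", container, "sh", "-c",
    		"mkdir -p /home/docker/.ssh && cat > /home/docker/.ssh/authorized_keys")
    	write.Stdin = bytes.NewReader(pub)
    	if err := write.Run(); err != nil {
    		return err
    	}
    	return exec.Command("docker", "exec", "--privileged", container,
    		"chown", "docker:docker", "/home/docker/.ssh/authorized_keys").Run()
    }

    func main() {
    	_ = provisionSSHKey(
    		`C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220602191616-12108\id_rsa`,
    		"calico-20220602191616-12108")
    }
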
	I0602 19:43:58.068711   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:43:59.322438   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.2535095s)
	I0602 19:43:59.322438   12568 machine.go:88] provisioning docker machine ...
	I0602 19:43:59.322438   12568 ubuntu.go:169] provisioning hostname "calico-20220602191616-12108"
	I0602 19:43:59.329622   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:44:00.624017   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.2943893s)
	I0602 19:44:00.628024   12568 main.go:134] libmachine: Using SSH client type: native
	I0602 19:44:00.634020   12568 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I0602 19:44:00.634020   12568 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-20220602191616-12108 && echo "calico-20220602191616-12108" | sudo tee /etc/hostname
	I0602 19:44:00.875457   12568 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-20220602191616-12108
	
	I0602 19:44:00.883724   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:44:02.273348   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.3894743s)
	I0602 19:44:02.277709   12568 main.go:134] libmachine: Using SSH client type: native
	I0602 19:44:02.278664   12568 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I0602 19:44:02.278696   12568 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220602191616-12108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220602191616-12108/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220602191616-12108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 19:44:02.435624   12568 main.go:134] libmachine: SSH cmd err, output: <nil>: 
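
The SSH script above is idempotent: it leaves /etc/hosts alone when a line already ends with the hostname, rewrites an existing 127.0.1.1 entry in place, and appends one otherwise. A pure-Go restatement of the same rule (a sketch, not minikube code):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // ensureHostname restates the shell logic: no-op when a line already ends
    // with the hostname (grep -xq '.*\s<name>'), rewrite an existing 127.0.1.1
    // entry (the sed branch), otherwise append one (the tee -a branch).
    func ensureHostname(hosts, name string) string {
    	lines := strings.Split(hosts, "\n")
    	for _, l := range lines {
    		if strings.HasSuffix(l, " "+name) || strings.HasSuffix(l, "\t"+name) {
    			return hosts
    		}
    	}
    	for i, l := range lines {
    		if strings.HasPrefix(l, "127.0.1.1") {
    			lines[i] = "127.0.1.1 " + name
    			return strings.Join(lines, "\n")
    		}
    	}
    	return hosts + "\n127.0.1.1 " + name
    }

    func main() {
    	fmt.Println(ensureHostname("127.0.0.1 localhost", "calico-20220602191616-12108"))
    }
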
	I0602 19:44:02.435624   12568 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0602 19:44:02.435624   12568 ubuntu.go:177] setting up certificates
	I0602 19:44:02.435624   12568 provision.go:83] configureAuth start
	I0602 19:44:02.456665   12568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220602191616-12108
	I0602 19:44:03.835318   12568 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220602191616-12108: (1.3786478s)
	I0602 19:44:03.835318   12568 provision.go:138] copyHostCerts
	I0602 19:44:03.835318   12568 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0602 19:44:03.835318   12568 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0602 19:44:03.836321   12568 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0602 19:44:03.838329   12568 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0602 19:44:03.838329   12568 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0602 19:44:03.838329   12568 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0602 19:44:03.840334   12568 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0602 19:44:03.840334   12568 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0602 19:44:03.841324   12568 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1675 bytes)
	I0602 19:44:03.843428   12568 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.calico-20220602191616-12108 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220602191616-12108]
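
provision.go:112 issues the machine's server certificate against the profile CA with the SAN set shown above (node IP, loopback, localhost, minikube, and the profile name). A compact crypto/x509 sketch of issuing such a certificate; the in-memory CA built in main is a stand-in for the ca.pem/ca-key.pem pair referenced in the log:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    // signServerCert issues a server certificate with the SAN set from the log,
    // signed by the supplied CA.
    func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.calico-20220602191616-12108"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(10, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// san=[192.168.67.2 127.0.0.1 localhost minikube calico-20220602191616-12108]
    		IPAddresses: []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
    		DNSNames:    []string{"localhost", "minikube", "calico-20220602191616-12108"},
    	}
    	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    }

    func main() {
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	ca := &x509.Certificate{
    		SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: "minikubeCA"},
    		NotBefore: time.Now(), NotAfter: time.Now().AddDate(10, 0, 0),
    		IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true,
    	}
    	der, err := signServerCert(ca, caKey)
    	fmt.Println(len(der), err)
    }
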
	I0602 19:44:04.095080   12568 provision.go:172] copyRemoteCerts
	I0602 19:44:04.107013   12568 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 19:44:04.114045   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:44:05.520449   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.4063447s)
	I0602 19:44:05.521088   12568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220602191616-12108\id_rsa Username:docker}
	I0602 19:44:05.630392   12568 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.5232626s)
	I0602 19:44:05.630837   12568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0602 19:44:05.698387   12568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1257 bytes)
	I0602 19:44:05.748314   12568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0602 19:44:05.813578   12568 provision.go:86] duration metric: configureAuth took 3.3779008s
	I0602 19:44:05.813578   12568 ubuntu.go:193] setting minikube options for container-runtime
	I0602 19:44:05.814347   12568 config.go:178] Loaded profile config "calico-20220602191616-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:44:05.823595   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:44:07.108024   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.2842846s)
	I0602 19:44:07.113049   12568 main.go:134] libmachine: Using SSH client type: native
	I0602 19:44:07.113049   12568 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I0602 19:44:07.113049   12568 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 19:44:07.305620   12568 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 19:44:07.306128   12568 ubuntu.go:71] root file system type: overlay
	I0602 19:44:07.306365   12568 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 19:44:07.313544   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:44:08.621498   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.3079478s)
	I0602 19:44:08.625519   12568 main.go:134] libmachine: Using SSH client type: native
	I0602 19:44:08.625519   12568 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I0602 19:44:08.625519   12568 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 19:44:08.853775   12568 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 19:44:08.876992   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:44:10.198744   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.3217461s)
	I0602 19:44:10.203722   12568 main.go:134] libmachine: Using SSH client type: native
	I0602 19:44:10.203722   12568 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I0602 19:44:10.203722   12568 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 19:44:11.877972   12568 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-02 19:44:08.837169000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0602 19:44:11.878081   12568 machine.go:91] provisioned docker machine in 12.5555883s
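
The single SSH command at 19:44:10 is a compare-then-swap: diff -u exits non-zero only when docker.service.new differs from the installed unit, and only then does the { mv; daemon-reload; enable; restart; } group run, so an already-correct unit (as on the second provisioning pass later in this log, where the diff output is empty) costs no docker restart. The same idiom in Go, as a sketch:

    package main

    import (
    	"bytes"
    	"os"
    	"os/exec"
    )

    // updateDockerUnit only swaps the unit and bounces the daemon when the
    // rendered content actually changed, like the diff || { ... } one-liner.
    func updateDockerUnit(path string, rendered []byte) error {
    	current, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(current, rendered) {
    		return nil // identical unit: no daemon-reload, no restart
    	}
    	if err := os.WriteFile(path, rendered, 0o644); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "enable", "docker"},
    		{"systemctl", "restart", "docker"},
    	} {
    		if err := exec.Command("sudo", args...).Run(); err != nil {
    			return err
    		}
    	}
    	return nil
    }

    func main() {
    	rendered, err := os.ReadFile("/lib/systemd/system/docker.service.new")
    	if err == nil {
    		err = updateDockerUnit("/lib/systemd/system/docker.service", rendered)
    	}
    	_ = err
    }
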
	I0602 19:44:11.878131   12568 client.go:171] LocalClient.Create took 1m16.8068463s
	I0602 19:44:11.878131   12568 start.go:173] duration metric: libmachine.API.Create for "calico-20220602191616-12108" took 1m16.8068964s
	I0602 19:44:11.878291   12568 start.go:306] post-start starting for "calico-20220602191616-12108" (driver="docker")
	I0602 19:44:11.878291   12568 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 19:44:11.892325   12568 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 19:44:11.899183   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:44:13.143174   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.2439855s)
	I0602 19:44:13.143174   12568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220602191616-12108\id_rsa Username:docker}
	I0602 19:44:13.299375   12568 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.4070435s)
	I0602 19:44:13.313435   12568 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 19:44:13.324383   12568 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 19:44:13.324383   12568 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 19:44:13.324383   12568 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 19:44:13.324383   12568 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 19:44:13.324383   12568 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0602 19:44:13.324383   12568 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0602 19:44:13.325377   12568 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\121082.pem -> 121082.pem in /etc/ssl/certs
	I0602 19:44:13.339380   12568 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 19:44:13.364385   12568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\121082.pem --> /etc/ssl/certs/121082.pem (1708 bytes)
	I0602 19:44:13.435010   12568 start.go:309] post-start completed in 1.5567121s
	I0602 19:44:13.454040   12568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220602191616-12108
	I0602 19:44:14.727176   12568 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220602191616-12108: (1.2731304s)
	I0602 19:44:14.727176   12568 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\config.json ...
	I0602 19:44:14.749207   12568 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 19:44:14.763189   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:44:16.079652   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.3164577s)
	I0602 19:44:16.079652   12568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220602191616-12108\id_rsa Username:docker}
	I0602 19:44:16.193906   12568 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.4446923s)
	I0602 19:44:16.202907   12568 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 19:44:16.214917   12568 start.go:134] duration metric: createHost completed in 1m21.1486551s
	I0602 19:44:16.232173   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:44:17.515490   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.2833117s)
	W0602 19:44:17.515490   12568 fix.go:129] unexpected machine state, will restart: <nil>
	I0602 19:44:17.515490   12568 machine.go:88] provisioning docker machine ...
	I0602 19:44:17.515490   12568 ubuntu.go:169] provisioning hostname "calico-20220602191616-12108"
	I0602 19:44:17.522475   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:44:18.793392   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.270911s)
	I0602 19:44:18.796356   12568 main.go:134] libmachine: Using SSH client type: native
	I0602 19:44:18.797355   12568 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I0602 19:44:18.797355   12568 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-20220602191616-12108 && echo "calico-20220602191616-12108" | sudo tee /etc/hostname
	I0602 19:44:19.027164   12568 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-20220602191616-12108
	
	I0602 19:44:19.035756   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:44:20.342114   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.3063525s)
	I0602 19:44:20.348127   12568 main.go:134] libmachine: Using SSH client type: native
	I0602 19:44:20.349121   12568 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I0602 19:44:20.349121   12568 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220602191616-12108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220602191616-12108/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220602191616-12108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 19:44:20.552739   12568 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 19:44:20.552806   12568 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0602 19:44:20.552806   12568 ubuntu.go:177] setting up certificates
	I0602 19:44:20.552873   12568 provision.go:83] configureAuth start
	I0602 19:44:20.570941   12568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220602191616-12108
	I0602 19:44:21.865687   12568 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220602191616-12108: (1.2945269s)
	I0602 19:44:21.865761   12568 provision.go:138] copyHostCerts
	I0602 19:44:21.866516   12568 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0602 19:44:21.866591   12568 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0602 19:44:21.866982   12568 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0602 19:44:21.868844   12568 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0602 19:44:21.868844   12568 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0602 19:44:21.869808   12568 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0602 19:44:21.870492   12568 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0602 19:44:21.870492   12568 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0602 19:44:21.871675   12568 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1675 bytes)
	I0602 19:44:21.874575   12568 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.calico-20220602191616-12108 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220602191616-12108]
	I0602 19:44:22.116622   12568 provision.go:172] copyRemoteCerts
	I0602 19:44:22.127450   12568 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 19:44:22.135489   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:44:23.461360   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.3257902s)
	I0602 19:44:23.462051   12568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220602191616-12108\id_rsa Username:docker}
	I0602 19:44:23.594540   12568 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.4670831s)
	I0602 19:44:23.594540   12568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0602 19:44:23.654222   12568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1257 bytes)
	I0602 19:44:23.718953   12568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0602 19:44:23.780667   12568 provision.go:86] duration metric: configureAuth took 3.2277179s
	I0602 19:44:23.780667   12568 ubuntu.go:193] setting minikube options for container-runtime
	I0602 19:44:23.780667   12568 config.go:178] Loaded profile config "calico-20220602191616-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:44:23.792640   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:44:24.975714   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.1829312s)
	I0602 19:44:24.985234   12568 main.go:134] libmachine: Using SSH client type: native
	I0602 19:44:24.985234   12568 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I0602 19:44:24.985234   12568 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 19:44:25.205186   12568 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 19:44:25.205747   12568 ubuntu.go:71] root file system type: overlay
	I0602 19:44:25.206417   12568 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 19:44:25.216775   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:44:26.394024   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.1771065s)
	I0602 19:44:26.400346   12568 main.go:134] libmachine: Using SSH client type: native
	I0602 19:44:26.400775   12568 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I0602 19:44:26.400975   12568 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 19:44:26.562769   12568 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 19:44:26.571744   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:44:27.784866   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.2131169s)
	I0602 19:44:27.788821   12568 main.go:134] libmachine: Using SSH client type: native
	I0602 19:44:27.788821   12568 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 54867 <nil> <nil>}
	I0602 19:44:27.788821   12568 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 19:44:28.019829   12568 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 19:44:28.019829   12568 machine.go:91] provisioned docker machine in 10.5042932s
	I0602 19:44:28.019829   12568 start.go:306] post-start starting for "calico-20220602191616-12108" (driver="docker")
	I0602 19:44:28.019829   12568 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 19:44:28.033261   12568 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 19:44:28.042096   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:44:29.274341   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.2313924s)
	I0602 19:44:29.275166   12568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220602191616-12108\id_rsa Username:docker}
	I0602 19:44:29.394578   12568 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.3602756s)
	I0602 19:44:29.403596   12568 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 19:44:29.414607   12568 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 19:44:29.415578   12568 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 19:44:29.415578   12568 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 19:44:29.415578   12568 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 19:44:29.415578   12568 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0602 19:44:29.415578   12568 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0602 19:44:29.415578   12568 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\121082.pem -> 121082.pem in /etc/ssl/certs
	I0602 19:44:29.425591   12568 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 19:44:29.453022   12568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\121082.pem --> /etc/ssl/certs/121082.pem (1708 bytes)
	I0602 19:44:29.529346   12568 start.go:309] post-start completed in 1.5095112s
	I0602 19:44:29.545800   12568 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 19:44:29.557785   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:44:30.766655   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.208565s)
	I0602 19:44:30.767265   12568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220602191616-12108\id_rsa Username:docker}
	I0602 19:44:30.910068   12568 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.3642622s)
	I0602 19:44:30.925050   12568 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 19:44:30.936908   12568 fix.go:57] fixHost completed within 6m8.8065195s
	I0602 19:44:30.936908   12568 start.go:81] releasing machines lock for "calico-20220602191616-12108", held for 6m8.8067624s
	I0602 19:44:30.943905   12568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220602191616-12108
	I0602 19:44:32.088772   12568 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220602191616-12108: (1.144862s)
	I0602 19:44:32.090773   12568 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 19:44:32.097796   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:44:32.101838   12568 ssh_runner.go:195] Run: sudo service containerd status
	I0602 19:44:32.109782   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:44:33.422982   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.3251807s)
	I0602 19:44:33.422982   12568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220602191616-12108\id_rsa Username:docker}
	I0602 19:44:33.438984   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.329197s)
	I0602 19:44:33.438984   12568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220602191616-12108\id_rsa Username:docker}
	I0602 19:44:33.638918   12568 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.5481385s)
	I0602 19:44:33.639293   12568 ssh_runner.go:235] Completed: sudo service containerd status: (1.5374002s)
	I0602 19:44:33.655169   12568 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 19:44:33.908187   12568 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 19:44:33.929237   12568 ssh_runner.go:195] Run: sudo service crio status
	I0602 19:44:33.983871   12568 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 19:44:34.051390   12568 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 19:44:34.101807   12568 ssh_runner.go:195] Run: sudo service docker status
	I0602 19:44:34.165678   12568 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 19:44:34.270249   12568 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 19:44:34.385398   12568 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0602 19:44:34.394389   12568 cli_runner.go:164] Run: docker exec -t calico-20220602191616-12108 dig +short host.docker.internal
	I0602 19:44:35.814692   12568 cli_runner.go:217] Completed: docker exec -t calico-20220602191616-12108 dig +short host.docker.internal: (1.4202962s)
	I0602 19:44:35.814692   12568 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0602 19:44:35.823693   12568 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0602 19:44:35.833692   12568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 19:44:35.885239   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:44:37.026315   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.141034s)
	I0602 19:44:37.026315   12568 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 19:44:37.035137   12568 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 19:44:37.141327   12568 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0602 19:44:37.141379   12568 docker.go:541] Images already preloaded, skipping extraction
	I0602 19:44:37.151482   12568 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 19:44:37.233104   12568 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0602 19:44:37.233104   12568 cache_images.go:84] Images are preloaded, skipping loading
	I0602 19:44:37.245966   12568 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 19:44:37.451059   12568 cni.go:95] Creating CNI manager for "calico"
	I0602 19:44:37.451059   12568 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 19:44:37.451596   12568 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220602191616-12108 NodeName:calico-20220602191616-12108 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 19:44:37.451915   12568 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "calico-20220602191616-12108"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0602 19:44:37.452169   12568 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20220602191616-12108 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:calico-20220602191616-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0602 19:44:37.467317   12568 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0602 19:44:37.494301   12568 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 19:44:37.505299   12568 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /var/lib/minikube /etc/init.d
	I0602 19:44:37.537297   12568 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0602 19:44:37.576956   12568 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 19:44:37.610934   12568 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2049 bytes)
	I0602 19:44:37.656621   12568 ssh_runner.go:362] scp memory --> /var/lib/minikube/openrc-restart-wrapper.sh (233 bytes)
	I0602 19:44:37.704562   12568 ssh_runner.go:362] scp memory --> /etc/init.d/kubelet (839 bytes)
	I0602 19:44:37.766428   12568 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0602 19:44:37.780164   12568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 19:44:37.815603   12568 certs.go:54] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108 for IP: 192.168.67.2
	I0602 19:44:37.815603   12568 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0602 19:44:37.816361   12568 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0602 19:44:37.817374   12568 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\client.key
	I0602 19:44:37.817709   12568 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\client.crt with IP's: []
	I0602 19:44:37.956738   12568 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\client.crt ...
	I0602 19:44:37.956738   12568 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\client.crt: {Name:mkdc94d9392c7ff5fe49ab613a885520c8cbcc32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:44:37.959153   12568 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\client.key ...
	I0602 19:44:37.959271   12568 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\client.key: {Name:mk25258f3b60f974b435a77f3d6a5b96794b21f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:44:37.961048   12568 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\apiserver.key.c7fa3a9e
	I0602 19:44:37.961387   12568 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0602 19:44:38.053560   12568 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\apiserver.crt.c7fa3a9e ...
	I0602 19:44:38.053560   12568 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\apiserver.crt.c7fa3a9e: {Name:mk18fc10807acdb57d0a037b8dcaf33939f6fe6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:44:38.055023   12568 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\apiserver.key.c7fa3a9e ...
	I0602 19:44:38.055023   12568 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\apiserver.key.c7fa3a9e: {Name:mkec4aee3f340832a3387b058f9873106a5b71d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:44:38.056714   12568 certs.go:320] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\apiserver.crt.c7fa3a9e -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\apiserver.crt
	I0602 19:44:38.064738   12568 certs.go:324] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\apiserver.key.c7fa3a9e -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\apiserver.key
	I0602 19:44:38.065393   12568 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\proxy-client.key
	I0602 19:44:38.066257   12568 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\proxy-client.crt with IP's: []
	I0602 19:44:38.300312   12568 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\proxy-client.crt ...
	I0602 19:44:38.300312   12568 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\proxy-client.crt: {Name:mk72885da9bd976aacb2f4f468702255ed9a8fd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:44:38.301575   12568 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\proxy-client.key ...
	I0602 19:44:38.301575   12568 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\proxy-client.key: {Name:mk6453dba2be111049ece0eb1970040b46e985ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:44:38.309050   12568 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\12108.pem (1338 bytes)
	W0602 19:44:38.309626   12568 certs.go:384] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\12108_empty.pem, impossibly tiny 0 bytes
	I0602 19:44:38.309737   12568 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0602 19:44:38.309737   12568 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0602 19:44:38.310262   12568 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0602 19:44:38.310557   12568 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0602 19:44:38.310642   12568 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\121082.pem (1708 bytes)
	I0602 19:44:38.311979   12568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 19:44:38.384305   12568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0602 19:44:38.457286   12568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 19:44:38.533131   12568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220602191616-12108\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0602 19:44:38.597439   12568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 19:44:38.646779   12568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0602 19:44:38.698345   12568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 19:44:38.763917   12568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0602 19:44:38.828175   12568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 19:44:38.881088   12568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\12108.pem --> /usr/share/ca-certificates/12108.pem (1338 bytes)
	I0602 19:44:38.938401   12568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\121082.pem --> /usr/share/ca-certificates/121082.pem (1708 bytes)
	I0602 19:44:39.003701   12568 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 19:44:39.059772   12568 ssh_runner.go:195] Run: openssl version
	I0602 19:44:39.085265   12568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 19:44:39.123381   12568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 19:44:39.136387   12568 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:16 /usr/share/ca-certificates/minikubeCA.pem
	I0602 19:44:39.147393   12568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 19:44:39.168381   12568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0602 19:44:39.219798   12568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12108.pem && ln -fs /usr/share/ca-certificates/12108.pem /etc/ssl/certs/12108.pem"
	I0602 19:44:39.257900   12568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12108.pem
	I0602 19:44:39.268902   12568 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:28 /usr/share/ca-certificates/12108.pem
	I0602 19:44:39.278924   12568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12108.pem
	I0602 19:44:39.299905   12568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12108.pem /etc/ssl/certs/51391683.0"
	I0602 19:44:39.339171   12568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121082.pem && ln -fs /usr/share/ca-certificates/121082.pem /etc/ssl/certs/121082.pem"
	I0602 19:44:39.371169   12568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121082.pem
	I0602 19:44:39.385338   12568 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:28 /usr/share/ca-certificates/121082.pem
	I0602 19:44:39.405510   12568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121082.pem
	I0602 19:44:39.435189   12568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/121082.pem /etc/ssl/certs/3ec20f2e.0"
	I0602 19:44:39.471111   12568 kubeadm.go:395] StartCluster: {Name:calico-20220602191616-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220602191616-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 19:44:39.479103   12568 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 19:44:39.591643   12568 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 19:44:39.640890   12568 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 19:44:39.672911   12568 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 19:44:39.682906   12568 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 19:44:39.722963   12568 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 19:44:39.722963   12568 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 19:45:02.168867   12568 out.go:204]   - Generating certificates and keys ...
	I0602 19:45:02.176856   12568 out.go:204]   - Booting up control plane ...
	I0602 19:45:02.184845   12568 out.go:204]   - Configuring RBAC rules ...
	I0602 19:45:02.189843   12568 cni.go:95] Creating CNI manager for "calico"
	I0602 19:45:02.194836   12568 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0602 19:45:02.197833   12568 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0602 19:45:02.197833   12568 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
	I0602 19:45:02.299376   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0602 19:45:07.859017   12568 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (5.5596171s)
	I0602 19:45:07.859017   12568 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0602 19:45:07.881503   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:07.882383   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae minikube.k8s.io/name=calico-20220602191616-12108 minikube.k8s.io/updated_at=2022_06_02T19_45_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:07.892171   12568 ops.go:34] apiserver oom_adj: -16
	I0602 19:45:08.186764   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:08.920983   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:09.421301   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:09.918889   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:10.421877   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:10.927612   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:11.417579   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:11.917089   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:12.416135   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:13.413869   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:13.919325   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:14.426836   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:15.260435   12568 kubeadm.go:1045] duration metric: took 7.4003542s to wait for elevateKubeSystemPrivileges.
	I0602 19:45:15.260435   12568 kubeadm.go:397] StartCluster complete in 35.7891698s
	I0602 19:45:15.260435   12568 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:45:15.261310   12568 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0602 19:45:15.263674   12568 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:45:16.162120   12568 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220602191616-12108" rescaled to 1
	I0602 19:45:16.162120   12568 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 19:45:16.163104   12568 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0602 19:45:16.163104   12568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0602 19:45:16.166127   12568 addons.go:65] Setting storage-provisioner=true in profile "calico-20220602191616-12108"
	I0602 19:45:16.166127   12568 addons.go:65] Setting default-storageclass=true in profile "calico-20220602191616-12108"
	I0602 19:45:16.166127   12568 addons.go:153] Setting addon storage-provisioner=true in "calico-20220602191616-12108"
	W0602 19:45:16.166127   12568 addons.go:165] addon storage-provisioner should already be in state true
	I0602 19:45:16.166127   12568 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220602191616-12108"
	I0602 19:45:16.167093   12568 host.go:66] Checking if "calico-20220602191616-12108" exists ...
	I0602 19:45:16.166127   12568 out.go:177] * Verifying Kubernetes components...
	I0602 19:45:16.163104   12568 config.go:178] Loaded profile config "calico-20220602191616-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:45:16.188102   12568 ssh_runner.go:195] Run: sudo service kubelet status
	I0602 19:45:16.190099   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:45:16.191095   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:45:17.065196   12568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0602 19:45:17.077175   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:45:17.855080   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.663978s)
	I0602 19:45:17.863095   12568 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 19:45:17.868088   12568 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 19:45:17.868088   12568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0602 19:45:17.878092   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:45:17.883102   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.6929964s)
	I0602 19:45:17.952088   12568 addons.go:153] Setting addon default-storageclass=true in "calico-20220602191616-12108"
	W0602 19:45:17.952088   12568 addons.go:165] addon default-storageclass should already be in state true
	I0602 19:45:17.952088   12568 host.go:66] Checking if "calico-20220602191616-12108" exists ...
	I0602 19:45:17.985092   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:45:18.779249   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.7020666s)
	I0602 19:45:18.782292   12568 node_ready.go:35] waiting up to 5m0s for node "calico-20220602191616-12108" to be "Ready" ...
	I0602 19:45:18.793244   12568 node_ready.go:49] node "calico-20220602191616-12108" has status "Ready":"True"
	I0602 19:45:18.793244   12568 node_ready.go:38] duration metric: took 10.9527ms waiting for node "calico-20220602191616-12108" to be "Ready" ...
	I0602 19:45:18.793244   12568 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 19:45:18.852810   12568 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace to be "Ready" ...
	I0602 19:45:19.538197   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.6600982s)
	I0602 19:45:19.538197   12568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220602191616-12108\id_rsa Username:docker}
	I0602 19:45:19.632410   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.6473107s)
	I0602 19:45:19.632410   12568 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0602 19:45:19.632410   12568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0602 19:45:19.647445   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:45:20.180208   12568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 19:45:21.046208   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:21.101235   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.4537834s)
	I0602 19:45:21.101235   12568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220602191616-12108\id_rsa Username:docker}
	I0602 19:45:21.777443   12568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 19:45:23.744608   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:24.858043   12568 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.792813s)
	I0602 19:45:24.858043   12568 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0602 19:45:25.545725   12568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.7672601s)
	I0602 19:45:25.545725   12568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.3654933s)
	I0602 19:45:25.549754   12568 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0602 19:45:25.553701   12568 addons.go:417] enableAddons completed in 9.3895777s
	I0602 19:45:26.047046   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:28.556289   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:30.957286   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:32.958874   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:35.451130   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:37.455873   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:39.952037   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:41.953590   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:44.055320   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:46.460244   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:48.464960   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:50.957962   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:53.456004   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:55.961545   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:58.542909   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:00.912996   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:02.956957   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:05.048708   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:07.462698   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:09.954846   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:12.462765   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:14.969890   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:17.463199   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:19.959815   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:21.959877   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:24.458199   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:26.935194   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:28.963416   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:31.475999   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:33.914876   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:35.965256   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:38.461171   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:40.906328   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:42.956634   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:45.459202   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:47.904173   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:49.953096   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:52.455048   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:54.948954   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:56.963450   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:59.453500   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:01.952788   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:03.955309   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:05.958585   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:08.420129   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:10.459913   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:12.908496   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:14.955755   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:17.418027   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:19.454695   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:21.903027   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:23.908908   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:25.956112   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:28.446426   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:30.914146   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:32.954453   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:34.955995   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:36.958893   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:39.421174   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:41.458951   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:43.460421   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:45.959840   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:53.747455   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:55.954527   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:58.455762   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:00.960324   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:03.464795   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:05.957435   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:08.044605   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:10.450678   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:12.463487   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:14.925660   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:16.956765   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:19.452346   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:21.454862   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:23.957475   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:25.971426   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:28.414667   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:30.453899   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:32.911637   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:35.455543   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:37.920387   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:40.918407   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:42.960337   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:45.442682   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:47.954106   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:49.959040   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:52.458987   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:54.916056   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:56.965645   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:48:59.413207   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:01.455754   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:03.500160   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:05.945227   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:07.957740   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:10.455333   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:12.954257   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:14.958912   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:16.959096   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:19.045772   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:19.045772   12568 pod_ready.go:81] duration metric: took 4m0.1919252s waiting for pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace to be "Ready" ...
	E0602 19:49:19.045895   12568 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0602 19:49:19.045895   12568 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-h2fxn" in "kube-system" namespace to be "Ready" ...
	I0602 19:49:21.177791   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:23.181592   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:25.194729   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:27.743237   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:30.162694   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:32.181463   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:34.681551   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:36.748195   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:39.176601   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:41.670418   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:43.745802   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:46.173898   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:48.176935   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:50.265170   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:52.676418   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:55.182823   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:57.670239   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:49:59.742563   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:02.167755   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:04.242315   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:06.344796   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:08.674434   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:10.677098   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:12.679144   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:14.680577   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:16.745260   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:19.162550   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:21.245709   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:23.746863   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:26.245856   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:28.667528   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:30.696523   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:32.744260   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:35.242903   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:37.260209   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:39.675135   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:42.183111   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:44.679069   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:47.166282   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:49.169469   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:51.668914   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:53.670651   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:55.743230   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:50:58.171419   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:00.181165   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:02.682049   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:05.167197   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:07.244016   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:09.747633   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:11.809872   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:14.180358   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:16.242796   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:18.681390   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:20.688505   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:23.180457   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:25.742599   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:28.169431   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:30.256578   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:32.674451   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:35.169990   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:37.243357   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:39.741533   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:42.183688   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:44.199792   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:46.246418   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:48.677079   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:50.686869   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:53.176101   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:55.598407   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:51:57.751192   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:00.243654   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:02.252657   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:04.748178   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:06.759812   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:09.178359   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:11.690269   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:14.177066   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:16.247071   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:18.266604   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:20.746595   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:23.264253   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:25.749643   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:28.179833   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:30.246703   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:32.686345   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:35.245025   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:37.684737   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:40.225050   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:42.746644   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:45.265896   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:47.662962   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:49.675976   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:51.676933   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:54.243606   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:56.244431   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:52:58.681242   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:53:00.760241   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:53:03.182469   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:53:05.686889   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:53:07.688896   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:53:09.689740   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:53:11.748604   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:53:14.188745   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:53:16.246709   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:53:18.745554   12568 pod_ready.go:102] pod "calico-node-h2fxn" in "kube-system" namespace has status "Ready":"False"
	I0602 19:53:19.358989   12568 pod_ready.go:81] duration metric: took 4m0.3120559s waiting for pod "calico-node-h2fxn" in "kube-system" namespace to be "Ready" ...
	E0602 19:53:19.359050   12568 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0602 19:53:19.359050   12568 pod_ready.go:38] duration metric: took 8m0.5637308s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 19:53:19.362751   12568 out.go:177] 
	W0602 19:53:19.365063   12568 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0602 19:53:19.365063   12568 out.go:239] * 
	W0602 19:53:19.366279   12568 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0602 19:53:19.370057   12568 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (988.87s)
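The calico failure above is a readiness timeout, not a crash: pod_ready.go polls the calico-node pod's "Ready" condition roughly every 2.5s, gives up after its 4m budget (pod_ready.go:81), and that in turn trips the outer GUEST_START wait, producing exit status 80. Below is a minimal client-go sketch of the same polling pattern; the namespace and pod name come from the log, while the structure (and the use of wait.PollImmediate) is an illustrative assumption rather than minikube's actual pod_ready implementation.

// readiness_poll.go: minimal sketch of the pod-readiness poll seen in the
// log (pod_ready.go:102). Assumes a reachable cluster via the default
// kubeconfig; the pod name is the one from this run and is illustrative.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Poll every 2.5s for up to 4m, mirroring the cadence and budget above.
	err = wait.PollImmediate(2500*time.Millisecond, 4*time.Minute, func() (bool, error) {
		pod, getErr := client.CoreV1().Pods("kube-system").Get(context.TODO(), "calico-node-h2fxn", metav1.GetOptions{})
		if getErr != nil {
			return false, nil // treat lookup errors as transient and keep polling
		}
		fmt.Printf("pod %q has status \"Ready\":%v\n", pod.Name, podReady(pod))
		return podReady(pod), nil
	})
	if err != nil {
		fmt.Println("timed out waiting for the condition")
	}
}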

                                                
                                    
TestNetworkPlugins/group/cilium/Start (628.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p cilium-20220602191616-12108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker
E0602 19:37:41.000128   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
E0602 19:39:06.676787   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
E0602 19:40:00.508799   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 19:40:44.572084   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
E0602 19:40:47.513825   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220602192234-12108\client.crt: The system cannot find the path specified.
E0602 19:40:47.529537   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220602192234-12108\client.crt: The system cannot find the path specified.
E0602 19:40:47.545479   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220602192234-12108\client.crt: The system cannot find the path specified.
E0602 19:40:47.576243   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220602192234-12108\client.crt: The system cannot find the path specified.
E0602 19:40:47.622722   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220602192234-12108\client.crt: The system cannot find the path specified.
E0602 19:40:47.716038   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220602192234-12108\client.crt: The system cannot find the path specified.
E0602 19:40:47.887526   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220602192234-12108\client.crt: The system cannot find the path specified.
E0602 19:40:48.216926   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220602192234-12108\client.crt: The system cannot find the path specified.
E0602 19:40:48.868341   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220602192234-12108\client.crt: The system cannot find the path specified.
E0602 19:40:50.149141   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220602192234-12108\client.crt: The system cannot find the path specified.
E0602 19:40:52.714326   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220602192234-12108\client.crt: The system cannot find the path specified.
E0602 19:40:57.842696   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220602192234-12108\client.crt: The system cannot find the path specified.
E0602 19:41:04.367617   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220602192231-12108\client.crt: The system cannot find the path specified.
E0602 19:41:04.383071   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220602192231-12108\client.crt: The system cannot find the path specified.
E0602 19:41:04.398364   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220602192231-12108\client.crt: The system cannot find the path specified.
E0602 19:41:04.429233   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220602192231-12108\client.crt: The system cannot find the path specified.
E0602 19:41:04.476145   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220602192231-12108\client.crt: The system cannot find the path specified.
E0602 19:41:04.571700   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220602192231-12108\client.crt: The system cannot find the path specified.
E0602 19:41:04.743730   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220602192231-12108\client.crt: The system cannot find the path specified.
E0602 19:41:05.068575   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220602192231-12108\client.crt: The system cannot find the path specified.
E0602 19:41:05.719201   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220602192231-12108\client.crt: The system cannot find the path specified.
E0602 19:41:07.001514   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220602192231-12108\client.crt: The system cannot find the path specified.
E0602 19:41:08.088458   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220602192234-12108\client.crt: The system cannot find the path specified.
E0602 19:41:09.571612   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220602192231-12108\client.crt: The system cannot find the path specified.
E0602 19:41:14.701012   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220602192231-12108\client.crt: The system cannot find the path specified.
E0602 19:41:24.949313   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220602192231-12108\client.crt: The system cannot find the path specified.
E0602 19:41:28.577817   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220602192234-12108\client.crt: The system cannot find the path specified.
E0602 19:41:45.435181   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220602192231-12108\client.crt: The system cannot find the path specified.
E0602 19:41:57.286377   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 19:42:07.991662   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-different-port-20220602192441-12108\client.crt: The system cannot find the path specified.
E0602 19:42:08.007473   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-different-port-20220602192441-12108\client.crt: The system cannot find the path specified.
E0602 19:42:08.023675   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-different-port-20220602192441-12108\client.crt: The system cannot find the path specified.
E0602 19:42:08.056980   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-different-port-20220602192441-12108\client.crt: The system cannot find the path specified.
E0602 19:42:08.104499   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-different-port-20220602192441-12108\client.crt: The system cannot find the path specified.
E0602 19:42:08.197089   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-different-port-20220602192441-12108\client.crt: The system cannot find the path specified.
E0602 19:42:08.369081   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-different-port-20220602192441-12108\client.crt: The system cannot find the path specified.
E0602 19:42:08.699372   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-different-port-20220602192441-12108\client.crt: The system cannot find the path specified.
E0602 19:42:09.340174   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-different-port-20220602192441-12108\client.crt: The system cannot find the path specified.
E0602 19:42:09.541779   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220602192234-12108\client.crt: The system cannot find the path specified.
E0602 19:42:10.635633   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-different-port-20220602192441-12108\client.crt: The system cannot find the path specified.
E0602 19:42:13.208967   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-different-port-20220602192441-12108\client.crt: The system cannot find the path specified.
E0602 19:42:18.337548   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-different-port-20220602192441-12108\client.crt: The system cannot find the path specified.
E0602 19:42:26.397579   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220602192231-12108\client.crt: The system cannot find the path specified.
E0602 19:42:28.585334   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-different-port-20220602192441-12108\client.crt: The system cannot find the path specified.
E0602 19:42:41.004512   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
E0602 19:42:49.066816   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-different-port-20220602192441-12108\client.crt: The system cannot find the path specified.
E0602 19:43:30.035209   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-different-port-20220602192441-12108\client.crt: The system cannot find the path specified.
E0602 19:43:31.467848   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220602192234-12108\client.crt: The system cannot find the path specified.
E0602 19:43:48.334172   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220602192231-12108\client.crt: The system cannot find the path specified.
E0602 19:44:06.676487   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
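The cert_rotation.go bursts above are side noise, not the cilium failure: client-go's certificate-rotation watcher is still holding client.crt paths from profiles that earlier tests tore down (ingress-addon-legacy, functional, addons, no-preload, old-k8s-version, default-k8s-different-port), so every reload attempt fails because the file is gone. A small, illustrative diagnostic (not part of minikube) that flags kubeconfig users whose client-certificate file has disappeared:

// stale_certs.go: list kubeconfig AuthInfos whose client-certificate file
// no longer exists on disk -- the situation behind the cert_rotation noise.
// Illustrative sketch; reads the default kubeconfig location.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	for name, auth := range cfg.AuthInfos {
		if auth.ClientCertificate == "" {
			continue // inline or absent certificate data; nothing to check
		}
		if _, statErr := os.Stat(auth.ClientCertificate); os.IsNotExist(statErr) {
			fmt.Printf("user %q references a missing cert: %s\n", name, auth.ClientCertificate)
		}
	}
}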

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cilium-20220602191616-12108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker: exit status 80 (10m28.4567259s)

                                                
                                                
-- stdout --
	* [cilium-20220602191616-12108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14269
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node cilium-20220602191616-12108 in cluster cilium-20220602191616-12108
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Cilium (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0602 19:37:25.604405   11612 out.go:296] Setting OutFile to fd 2008 ...
	I0602 19:37:25.665714   11612 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 19:37:25.665714   11612 out.go:309] Setting ErrFile to fd 1944...
	I0602 19:37:25.665714   11612 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 19:37:25.689407   11612 out.go:303] Setting JSON to false
	I0602 19:37:25.691746   11612 start.go:115] hostinfo: {"hostname":"minikube7","uptime":61787,"bootTime":1654136858,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0602 19:37:25.692427   11612 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 19:37:25.698488   11612 out.go:177] * [cilium-20220602191616-12108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0602 19:37:25.701453   11612 notify.go:193] Checking for updates...
	I0602 19:37:25.704513   11612 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0602 19:37:25.707358   11612 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0602 19:37:25.709774   11612 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 19:37:25.715911   11612 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 19:37:25.720383   11612 config.go:178] Loaded profile config "auto-20220602191545-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:37:25.720456   11612 config.go:178] Loaded profile config "calico-20220602191616-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:37:25.721048   11612 config.go:178] Loaded profile config "newest-cni-20220602193528-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:37:25.721048   11612 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 19:37:28.390683   11612 docker.go:137] docker version: linux-20.10.16
	I0602 19:37:28.397683   11612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:37:30.449576   11612 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0518845s)
	I0602 19:37:30.449576   11612 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:63 OomKillDisable:true NGoroutines:74 SystemTime:2022-06-02 19:37:29.457337 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:37:30.454954   11612 out.go:177] * Using the docker driver based on user configuration
	I0602 19:37:30.458611   11612 start.go:284] selected driver: docker
	I0602 19:37:30.458611   11612 start.go:806] validating driver "docker" against <nil>
	I0602 19:37:30.459169   11612 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 19:37:30.533537   11612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:37:32.607651   11612 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0741049s)
	I0602 19:37:32.607651   11612 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:63 OomKillDisable:true NGoroutines:74 SystemTime:2022-06-02 19:37:31.5898703 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:37:32.607651   11612 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0602 19:37:32.608650   11612 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0602 19:37:32.611693   11612 out.go:177] * Using Docker Desktop driver with the root privilege
	I0602 19:37:32.613649   11612 cni.go:95] Creating CNI manager for "cilium"
	I0602 19:37:32.613649   11612 start_flags.go:301] Found "Cilium" CNI - setting NetworkPlugin=cni
	I0602 19:37:32.613649   11612 start_flags.go:306] config:
	{Name:cilium-20220602191616-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:cilium-20220602191616-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 19:37:32.618661   11612 out.go:177] * Starting control plane node cilium-20220602191616-12108 in cluster cilium-20220602191616-12108
	I0602 19:37:32.622647   11612 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 19:37:32.625646   11612 out.go:177] * Pulling base image ...
	I0602 19:37:32.627648   11612 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 19:37:32.627648   11612 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 19:37:32.628662   11612 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 19:37:32.628662   11612 cache.go:57] Caching tarball of preloaded images
	I0602 19:37:32.628662   11612 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 19:37:32.628662   11612 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0602 19:37:32.629660   11612 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\config.json ...
	I0602 19:37:32.629660   11612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\config.json: {Name:mkb3b35e8281f7fdacb774509d49aa312de53789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:37:33.770425   11612 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 19:37:33.770425   11612 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 19:37:33.770425   11612 cache.go:206] Successfully downloaded all kic artifacts
	I0602 19:37:33.770845   11612 start.go:352] acquiring machines lock for cilium-20220602191616-12108: {Name:mke8c51b1b7f67ea8da2e6d3335883a4c167d884 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 19:37:33.771048   11612 start.go:356] acquired machines lock for "cilium-20220602191616-12108" in 0s
	I0602 19:37:33.771290   11612 start.go:91] Provisioning new machine with config: &{Name:cilium-20220602191616-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:cilium-20220602191616-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 19:37:33.771461   11612 start.go:131] createHost starting for "" (driver="docker")
	I0602 19:37:33.776408   11612 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0602 19:37:33.776408   11612 start.go:165] libmachine.API.Create for "cilium-20220602191616-12108" (driver="docker")
	I0602 19:37:33.776408   11612 client.go:168] LocalClient.Create starting
	I0602 19:37:33.776408   11612 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0602 19:37:33.777423   11612 main.go:134] libmachine: Decoding PEM data...
	I0602 19:37:33.777423   11612 main.go:134] libmachine: Parsing certificate...
	I0602 19:37:33.777423   11612 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0602 19:37:33.777423   11612 main.go:134] libmachine: Decoding PEM data...
	I0602 19:37:33.777423   11612 main.go:134] libmachine: Parsing certificate...
	I0602 19:37:33.785413   11612 cli_runner.go:164] Run: docker network inspect cilium-20220602191616-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0602 19:37:34.868149   11612 cli_runner.go:211] docker network inspect cilium-20220602191616-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0602 19:37:34.868149   11612 cli_runner.go:217] Completed: docker network inspect cilium-20220602191616-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0827311s)
	I0602 19:37:34.875147   11612 network_create.go:272] running [docker network inspect cilium-20220602191616-12108] to gather additional debugging logs...
	I0602 19:37:34.875147   11612 cli_runner.go:164] Run: docker network inspect cilium-20220602191616-12108
	W0602 19:37:35.963070   11612 cli_runner.go:211] docker network inspect cilium-20220602191616-12108 returned with exit code 1
	I0602 19:37:35.963070   11612 cli_runner.go:217] Completed: docker network inspect cilium-20220602191616-12108: (1.0879183s)
	I0602 19:37:35.963070   11612 network_create.go:275] error running [docker network inspect cilium-20220602191616-12108]: docker network inspect cilium-20220602191616-12108: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220602191616-12108
	I0602 19:37:35.963070   11612 network_create.go:277] output of [docker network inspect cilium-20220602191616-12108]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220602191616-12108
	
	** /stderr **
	I0602 19:37:35.971219   11612 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 19:37:37.042310   11612 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0710858s)
	I0602 19:37:37.061306   11612 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00015e210] misses:0}
	I0602 19:37:37.061306   11612 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 19:37:37.061306   11612 network_create.go:115] attempt to create docker network cilium-20220602191616-12108 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0602 19:37:37.069340   11612 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220602191616-12108
	I0602 19:37:38.240145   11612 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220602191616-12108: (1.1707998s)
	I0602 19:37:38.240145   11612 network_create.go:99] docker network cilium-20220602191616-12108 192.168.49.0/24 created
	I0602 19:37:38.240145   11612 kic.go:106] calculated static IP "192.168.49.2" for the "cilium-20220602191616-12108" container
	I0602 19:37:38.256782   11612 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0602 19:37:39.387739   11612 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1307235s)
	I0602 19:37:39.396300   11612 cli_runner.go:164] Run: docker volume create cilium-20220602191616-12108 --label name.minikube.sigs.k8s.io=cilium-20220602191616-12108 --label created_by.minikube.sigs.k8s.io=true
	I0602 19:37:40.530245   11612 cli_runner.go:217] Completed: docker volume create cilium-20220602191616-12108 --label name.minikube.sigs.k8s.io=cilium-20220602191616-12108 --label created_by.minikube.sigs.k8s.io=true: (1.1336596s)
	I0602 19:37:40.530307   11612 oci.go:103] Successfully created a docker volume cilium-20220602191616-12108
	I0602 19:37:40.539880   11612 cli_runner.go:164] Run: docker run --rm --name cilium-20220602191616-12108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220602191616-12108 --entrypoint /usr/bin/test -v cilium-20220602191616-12108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib
	I0602 19:38:03.377763   11612 cli_runner.go:217] Completed: docker run --rm --name cilium-20220602191616-12108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220602191616-12108 --entrypoint /usr/bin/test -v cilium-20220602191616-12108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib: (22.8373179s)
	I0602 19:38:03.377763   11612 oci.go:107] Successfully prepared a docker volume cilium-20220602191616-12108
	I0602 19:38:03.378101   11612 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 19:38:03.378287   11612 kic.go:179] Starting extracting preloaded images to volume ...
	I0602 19:38:03.388266   11612 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20220602191616-12108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir
	I0602 19:38:37.792825   11612 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20220602191616-12108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir: (34.4041014s)
	I0602 19:38:37.792825   11612 kic.go:188] duration metric: took 34.414387 seconds to extract preloaded images to volume
	I0602 19:38:37.799962   11612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:38:39.760780   11612 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9606077s)
	I0602 19:38:39.760938   11612 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:48 OomKillDisable:true NGoroutines:50 SystemTime:2022-06-02 19:38:38.7760123 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:38:39.768983   11612 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0602 19:38:41.780571   11612 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.0115218s)
	I0602 19:38:41.789115   11612 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220602191616-12108 --name cilium-20220602191616-12108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220602191616-12108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220602191616-12108 --network cilium-20220602191616-12108 --ip 192.168.49.2 --volume cilium-20220602191616-12108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496
	I0602 19:38:43.971860   11612 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220602191616-12108 --name cilium-20220602191616-12108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220602191616-12108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220602191616-12108 --network cilium-20220602191616-12108 --ip 192.168.49.2 --volume cilium-20220602191616-12108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496: (2.1825569s)
	I0602 19:38:43.981495   11612 cli_runner.go:164] Run: docker container inspect cilium-20220602191616-12108 --format={{.State.Running}}
	I0602 19:38:45.098972   11612 cli_runner.go:217] Completed: docker container inspect cilium-20220602191616-12108 --format={{.State.Running}}: (1.1174722s)
	I0602 19:38:45.105973   11612 cli_runner.go:164] Run: docker container inspect cilium-20220602191616-12108 --format={{.State.Status}}
	I0602 19:38:46.198487   11612 cli_runner.go:217] Completed: docker container inspect cilium-20220602191616-12108 --format={{.State.Status}}: (1.0925094s)
	I0602 19:38:46.205914   11612 cli_runner.go:164] Run: docker exec cilium-20220602191616-12108 stat /var/lib/dpkg/alternatives/iptables
	I0602 19:38:47.344949   11612 cli_runner.go:217] Completed: docker exec cilium-20220602191616-12108 stat /var/lib/dpkg/alternatives/iptables: (1.1390294s)
	I0602 19:38:47.344949   11612 oci.go:247] the created container "cilium-20220602191616-12108" has a running status.
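The inspect calls above double as a readiness probe on the freshly created kic container. A minimal shell sketch of the same poll (container name taken from this run; the retry bound is an illustrative assumption):

	# Wait until the container reports Running; give up after ~30s
	for i in $(seq 1 30); do
	  [ "$(docker container inspect cilium-20220602191616-12108 --format '{{.State.Running}}')" = "true" ] && break
	  sleep 1
	done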
	I0602 19:38:47.345244   11612 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\cilium-20220602191616-12108\id_rsa...
	I0602 19:38:47.709448   11612 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\cilium-20220602191616-12108\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0602 19:38:48.895887   11612 cli_runner.go:164] Run: docker container inspect cilium-20220602191616-12108 --format={{.State.Status}}
	I0602 19:38:49.997845   11612 cli_runner.go:217] Completed: docker container inspect cilium-20220602191616-12108 --format={{.State.Status}}: (1.1009971s)
	I0602 19:38:50.017044   11612 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0602 19:38:50.017044   11612 kic_runner.go:114] Args: [docker exec --privileged cilium-20220602191616-12108 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0602 19:38:51.268635   11612 kic_runner.go:123] Done: [docker exec --privileged cilium-20220602191616-12108 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.2514106s)
	I0602 19:38:51.272906   11612 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\cilium-20220602191616-12108\id_rsa...
	I0602 19:38:51.801028   11612 cli_runner.go:164] Run: docker container inspect cilium-20220602191616-12108 --format={{.State.Status}}
	I0602 19:38:52.851984   11612 cli_runner.go:217] Completed: docker container inspect cilium-20220602191616-12108 --format={{.State.Status}}: (1.0509515s)
	I0602 19:38:52.852206   11612 machine.go:88] provisioning docker machine ...
	I0602 19:38:52.852206   11612 ubuntu.go:169] provisioning hostname "cilium-20220602191616-12108"
	I0602 19:38:52.859521   11612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602191616-12108
	I0602 19:38:53.936196   11612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602191616-12108: (1.076671s)
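The container publishes its ports with an empty host side (--publish=127.0.0.1::22 in the docker run above), so Docker picks a free loopback port and the inspect template recovers it. docker port gives the same answer more directly (a sketch; the port varies per run):

	docker port cilium-20220602191616-12108 22/tcp   # e.g. 127.0.0.1:54726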
	I0602 19:38:53.940834   11612 main.go:134] libmachine: Using SSH client type: native
	I0602 19:38:53.946786   11612 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 54726 <nil> <nil>}
	I0602 19:38:53.946786   11612 main.go:134] libmachine: About to run SSH command:
	sudo hostname cilium-20220602191616-12108 && echo "cilium-20220602191616-12108" | sudo tee /etc/hostname
	I0602 19:38:54.128614   11612 main.go:134] libmachine: SSH cmd err, output: <nil>: cilium-20220602191616-12108
	
	I0602 19:38:54.138033   11612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602191616-12108
	I0602 19:38:55.182471   11612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602191616-12108: (1.0444339s)
	I0602 19:38:55.186439   11612 main.go:134] libmachine: Using SSH client type: native
	I0602 19:38:55.186563   11612 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 54726 <nil> <nil>}
	I0602 19:38:55.186563   11612 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scilium-20220602191616-12108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cilium-20220602191616-12108/g' /etc/hosts;
				else 
					echo '127.0.1.1 cilium-20220602191616-12108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 19:38:55.398654   11612 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 19:38:55.398654   11612 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0602 19:38:55.398654   11612 ubuntu.go:177] setting up certificates
	I0602 19:38:55.398654   11612 provision.go:83] configureAuth start
	I0602 19:38:55.406639   11612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220602191616-12108
	I0602 19:38:56.428190   11612 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220602191616-12108: (1.0215464s)
	I0602 19:38:56.428190   11612 provision.go:138] copyHostCerts
	I0602 19:38:56.428190   11612 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0602 19:38:56.428190   11612 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0602 19:38:56.428190   11612 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0602 19:38:56.430404   11612 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0602 19:38:56.430404   11612 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0602 19:38:56.430773   11612 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0602 19:38:56.431962   11612 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0602 19:38:56.432030   11612 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0602 19:38:56.432388   11612 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1675 bytes)
	I0602 19:38:56.433296   11612 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.cilium-20220602191616-12108 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube cilium-20220602191616-12108]
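Here provision.go signs a server certificate against the local minikube CA with the SANs listed above. A rough openssl equivalent, assuming the CA files named in the log; the subject, key size, and validity are illustrative, not minikube's exact parameters:

	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.cilium-20220602191616-12108"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:cilium-20220602191616-12108')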
	I0602 19:38:56.540813   11612 provision.go:172] copyRemoteCerts
	I0602 19:38:56.548798   11612 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 19:38:56.555792   11612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602191616-12108
	I0602 19:38:57.587469   11612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602191616-12108: (1.0316722s)
	I0602 19:38:57.587469   11612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54726 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\cilium-20220602191616-12108\id_rsa Username:docker}
	I0602 19:38:57.757637   11612 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.208834s)
	I0602 19:38:57.758077   11612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0602 19:38:57.815702   11612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0602 19:38:57.877987   11612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0602 19:38:57.942149   11612 provision.go:86] duration metric: configureAuth took 2.5434833s
	I0602 19:38:57.942149   11612 ubuntu.go:193] setting minikube options for container-runtime
	I0602 19:38:57.943217   11612 config.go:178] Loaded profile config "cilium-20220602191616-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:38:57.955534   11612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602191616-12108
	I0602 19:38:59.028173   11612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602191616-12108: (1.0726343s)
	I0602 19:38:59.032973   11612 main.go:134] libmachine: Using SSH client type: native
	I0602 19:38:59.032973   11612 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 54726 <nil> <nil>}
	I0602 19:38:59.032973   11612 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 19:38:59.225232   11612 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 19:38:59.225232   11612 ubuntu.go:71] root file system type: overlay
	I0602 19:38:59.225232   11612 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 19:38:59.233764   11612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602191616-12108
	I0602 19:39:00.295345   11612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602191616-12108: (1.0615761s)
	I0602 19:39:00.299477   11612 main.go:134] libmachine: Using SSH client type: native
	I0602 19:39:00.300028   11612 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 54726 <nil> <nil>}
	I0602 19:39:00.300257   11612 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 19:39:00.544968   11612 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 19:39:00.557114   11612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602191616-12108
	I0602 19:39:01.597321   11612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602191616-12108: (1.0400687s)
	I0602 19:39:01.601013   11612 main.go:134] libmachine: Using SSH client type: native
	I0602 19:39:01.601428   11612 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 54726 <nil> <nil>}
	I0602 19:39:01.601543   11612 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 19:39:03.075176   11612 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-02 19:39:00.530339000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0602 19:39:03.075274   11612 machine.go:91] provisioned docker machine in 10.2230225s
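The unit swap above is guarded by diff, so an unchanged docker.service never triggers a restart. Once swapped in, the effective unit can be double-checked with systemd itself (a verification sketch, not part of the test run):

	systemctl cat docker.service | grep ExecStart    # the cleared ExecStart= plus the dockerd line
	systemctl show docker --property=ExecStart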
	I0602 19:39:03.075274   11612 client.go:171] LocalClient.Create took 1m29.2984744s
	I0602 19:39:03.075389   11612 start.go:173] duration metric: libmachine.API.Create for "cilium-20220602191616-12108" took 1m29.2984744s
	I0602 19:39:03.075439   11612 start.go:306] post-start starting for "cilium-20220602191616-12108" (driver="docker")
	I0602 19:39:03.075439   11612 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 19:39:03.085757   11612 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 19:39:03.093901   11612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602191616-12108
	I0602 19:39:04.142952   11612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602191616-12108: (1.0490463s)
	I0602 19:39:04.142952   11612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54726 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\cilium-20220602191616-12108\id_rsa Username:docker}
	I0602 19:39:04.298507   11612 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.2127444s)
	I0602 19:39:04.308896   11612 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 19:39:04.324377   11612 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 19:39:04.324377   11612 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 19:39:04.324377   11612 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 19:39:04.324377   11612 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 19:39:04.324377   11612 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0602 19:39:04.324959   11612 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0602 19:39:04.326265   11612 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\121082.pem -> 121082.pem in /etc/ssl/certs
	I0602 19:39:04.337107   11612 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 19:39:04.367375   11612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\121082.pem --> /etc/ssl/certs/121082.pem (1708 bytes)
	I0602 19:39:04.423848   11612 start.go:309] post-start completed in 1.3484025s
	I0602 19:39:04.451677   11612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220602191616-12108
	I0602 19:39:05.518899   11612 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220602191616-12108: (1.0670329s)
	I0602 19:39:05.519104   11612 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\config.json ...
	I0602 19:39:05.531490   11612 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 19:39:05.538082   11612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602191616-12108
	I0602 19:39:06.630075   11612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602191616-12108: (1.0917618s)
	I0602 19:39:06.630441   11612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54726 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\cilium-20220602191616-12108\id_rsa Username:docker}
	I0602 19:39:06.777298   11612 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.2458026s)
	I0602 19:39:06.787056   11612 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 19:39:06.803397   11612 start.go:134] duration metric: createHost completed in 1m33.0315282s
	I0602 19:39:06.803840   11612 start.go:81] releasing machines lock for "cilium-20220602191616-12108", held for 1m33.0323397s
	I0602 19:39:06.812102   11612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220602191616-12108
	I0602 19:39:07.866984   11612 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220602191616-12108: (1.0548772s)
	I0602 19:39:07.868991   11612 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 19:39:07.876989   11612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602191616-12108
	I0602 19:39:07.876989   11612 ssh_runner.go:195] Run: systemctl --version
	I0602 19:39:07.884990   11612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602191616-12108
	I0602 19:39:08.929257   11612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602191616-12108: (1.0522632s)
	I0602 19:39:08.929257   11612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54726 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\cilium-20220602191616-12108\id_rsa Username:docker}
	I0602 19:39:08.945205   11612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602191616-12108: (1.0602109s)
	I0602 19:39:08.945205   11612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54726 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\cilium-20220602191616-12108\id_rsa Username:docker}
	I0602 19:39:09.213442   11612 ssh_runner.go:235] Completed: systemctl --version: (1.3364037s)
	I0602 19:39:09.213472   11612 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.3444451s)
	I0602 19:39:09.223987   11612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 19:39:09.260228   11612 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 19:39:09.297235   11612 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 19:39:09.308322   11612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 19:39:09.340110   11612 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 19:39:09.390341   11612 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 19:39:09.568211   11612 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 19:39:09.754980   11612 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 19:39:09.791973   11612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 19:39:09.991682   11612 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 19:39:10.027644   11612 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 19:39:10.136588   11612 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 19:39:10.231903   11612 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0602 19:39:10.241472   11612 cli_runner.go:164] Run: docker exec -t cilium-20220602191616-12108 dig +short host.docker.internal
	I0602 19:39:11.488962   11612 cli_runner.go:217] Completed: docker exec -t cilium-20220602191616-12108 dig +short host.docker.internal: (1.2474847s)
	I0602 19:39:11.489236   11612 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0602 19:39:11.495868   11612 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0602 19:39:11.513283   11612 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 19:39:11.547084   11612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20220602191616-12108
	I0602 19:39:12.598402   11612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20220602191616-12108: (1.0513133s)
	I0602 19:39:12.598944   11612 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 19:39:12.605806   11612 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 19:39:12.687833   11612 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0602 19:39:12.687833   11612 docker.go:541] Images already preloaded, skipping extraction
	I0602 19:39:12.697108   11612 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 19:39:12.773764   11612 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0602 19:39:12.773855   11612 cache_images.go:84] Images are preloaded, skipping loading
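The preload check is just an image inventory compared against the expected tags for v1.23.6. The same inventory can be reproduced by hand inside the node (grep pattern illustrative):

	docker images --format '{{.Repository}}:{{.Tag}}' | grep -E '^(k8s\.gcr\.io|gcr\.io)/'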
	I0602 19:39:12.780988   11612 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 19:39:12.960496   11612 cni.go:95] Creating CNI manager for "cilium"
	I0602 19:39:12.961146   11612 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 19:39:12.961448   11612 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cilium-20220602191616-12108 NodeName:cilium-20220602191616-12108 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 19:39:12.961546   11612 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "cilium-20220602191616-12108"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
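	A config of this shape can be sanity-checked before the real bootstrap; a hedged example using kubeadm's own preflight phase (path as written later in this run):

	sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml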
	
	I0602 19:39:12.961546   11612 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=cilium-20220602191616-12108 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:cilium-20220602191616-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:}
	I0602 19:39:12.972360   11612 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0602 19:39:13.001177   11612 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 19:39:13.010078   11612 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 19:39:13.031794   11612 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0602 19:39:13.076267   11612 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 19:39:13.115868   11612 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2049 bytes)
	I0602 19:39:13.164084   11612 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0602 19:39:13.178696   11612 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 19:39:13.205673   11612 certs.go:54] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108 for IP: 192.168.49.2
	I0602 19:39:13.206295   11612 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0602 19:39:13.206601   11612 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0602 19:39:13.207249   11612 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\client.key
	I0602 19:39:13.207472   11612 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\client.crt with IP's: []
	I0602 19:39:13.278068   11612 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\client.crt ...
	I0602 19:39:13.278068   11612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\client.crt: {Name:mk2d9bcb1180913b7ac11aa4450d321eb5780810 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:39:13.279386   11612 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\client.key ...
	I0602 19:39:13.279386   11612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\client.key: {Name:mk88a7bb42770bedd928ad570a43ebb360c66491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:39:13.280770   11612 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\apiserver.key.dd3b5fb2
	I0602 19:39:13.280948   11612 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0602 19:39:13.541282   11612 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\apiserver.crt.dd3b5fb2 ...
	I0602 19:39:13.541282   11612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\apiserver.crt.dd3b5fb2: {Name:mkb2af5f1bb8ae6925636cff2289fa872533004d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:39:13.542772   11612 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\apiserver.key.dd3b5fb2 ...
	I0602 19:39:13.542772   11612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\apiserver.key.dd3b5fb2: {Name:mkf51cd9a087ffd5e5306b7ebe49a49aecbd8b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:39:13.543076   11612 certs.go:320] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\apiserver.crt.dd3b5fb2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\apiserver.crt
	I0602 19:39:13.549139   11612 certs.go:324] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\apiserver.key.dd3b5fb2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\apiserver.key
	I0602 19:39:13.550854   11612 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\proxy-client.key
	I0602 19:39:13.551115   11612 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\proxy-client.crt with IP's: []
	I0602 19:39:13.731301   11612 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\proxy-client.crt ...
	I0602 19:39:13.731301   11612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\proxy-client.crt: {Name:mkf11cddeff752a4cac56f077ce7956377faf42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:39:13.732265   11612 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\proxy-client.key ...
	I0602 19:39:13.732265   11612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\proxy-client.key: {Name:mkf502d802ea30311992cd0c3102eedb76123711 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:39:13.741250   11612 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\12108.pem (1338 bytes)
	W0602 19:39:13.741766   11612 certs.go:384] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\12108_empty.pem, impossibly tiny 0 bytes
	I0602 19:39:13.741766   11612 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0602 19:39:13.742041   11612 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0602 19:39:13.742041   11612 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0602 19:39:13.742610   11612 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0602 19:39:13.742674   11612 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\121082.pem (1708 bytes)
	I0602 19:39:13.743813   11612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 19:39:13.792775   11612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0602 19:39:13.845131   11612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 19:39:13.899684   11612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220602191616-12108\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0602 19:39:13.954472   11612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 19:39:13.999066   11612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0602 19:39:14.059793   11612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 19:39:14.114148   11612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0602 19:39:14.184238   11612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 19:39:14.246923   11612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\12108.pem --> /usr/share/ca-certificates/12108.pem (1338 bytes)
	I0602 19:39:14.305128   11612 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\121082.pem --> /usr/share/ca-certificates/121082.pem (1708 bytes)
	I0602 19:39:14.361899   11612 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 19:39:14.413695   11612 ssh_runner.go:195] Run: openssl version
	I0602 19:39:14.440045   11612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 19:39:14.479491   11612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 19:39:14.502832   11612 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:16 /usr/share/ca-certificates/minikubeCA.pem
	I0602 19:39:14.516602   11612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 19:39:14.540589   11612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
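The 8-hex-digit link name (b5213941.0) follows OpenSSL's subject-hash convention: the value printed by openssl x509 -hash becomes the filename OpenSSL scans for during CA lookup. The same link can be rebuilt by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"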
	I0602 19:39:14.571590   11612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12108.pem && ln -fs /usr/share/ca-certificates/12108.pem /etc/ssl/certs/12108.pem"
	I0602 19:39:14.600582   11612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12108.pem
	I0602 19:39:14.610804   11612 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:28 /usr/share/ca-certificates/12108.pem
	I0602 19:39:14.622099   11612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12108.pem
	I0602 19:39:14.646598   11612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12108.pem /etc/ssl/certs/51391683.0"
	I0602 19:39:14.679980   11612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121082.pem && ln -fs /usr/share/ca-certificates/121082.pem /etc/ssl/certs/121082.pem"
	I0602 19:39:14.715835   11612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121082.pem
	I0602 19:39:14.730413   11612 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:28 /usr/share/ca-certificates/121082.pem
	I0602 19:39:14.741792   11612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121082.pem
	I0602 19:39:14.767728   11612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/121082.pem /etc/ssl/certs/3ec20f2e.0"
	I0602 19:39:14.799852   11612 kubeadm.go:395] StartCluster: {Name:cilium-20220602191616-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:cilium-20220602191616-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 19:39:14.807527   11612 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 19:39:14.894194   11612 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 19:39:14.941194   11612 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 19:39:14.987966   11612 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 19:39:14.999451   11612 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 19:39:15.020463   11612 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 19:39:15.020463   11612 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 19:39:35.444995   11612 out.go:204]   - Generating certificates and keys ...
	I0602 19:39:35.456471   11612 out.go:204]   - Booting up control plane ...
	I0602 19:39:35.464113   11612 out.go:204]   - Configuring RBAC rules ...
	I0602 19:39:35.467708   11612 cni.go:95] Creating CNI manager for "cilium"
	I0602 19:39:35.474693   11612 out.go:177] * Configuring Cilium (Container Networking Interface) ...
	I0602 19:39:35.489977   11612 ssh_runner.go:195] Run: sudo /bin/bash -c "grep 'bpffs /sys/fs/bpf' /proc/mounts || sudo mount bpffs -t bpf /sys/fs/bpf"
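Cilium requires the BPF filesystem at /sys/fs/bpf, and the grep-or-mount one-liner above makes the mount idempotent. Verifying by hand:

	mount | grep /sys/fs/bpf   # expect something like: bpffs on /sys/fs/bpf type bpf (rw,...)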
	I0602 19:39:35.658911   11612 cilium.go:816] Using pod CIDR: 10.244.0.0/16
	I0602 19:39:35.658980   11612 cilium.go:827] cilium options: {PodSubnet:10.244.0.0/16}
	I0602 19:39:35.659276   11612 cilium.go:831] cilium config:
	---
	# Source: cilium/templates/cilium-agent-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-configmap.yaml
	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: cilium-config
	  namespace: kube-system
	data:
	
	  # Identity allocation mode selects how identities are shared between cilium
	  # nodes by setting how they are stored. The options are "crd" or "kvstore".
	  # - "crd" stores identities in kubernetes as CRDs (custom resource definition).
	  #   These can be queried with:
	  #     kubectl get ciliumid
	  # - "kvstore" stores identities in a kvstore, etcd or consul, that is
	  #   configured below. Cilium versions before 1.6 supported only the kvstore
	  #   backend. Upgrades from these older cilium versions should continue using
	  #   the kvstore by commenting out the identity-allocation-mode below, or
	  #   setting it to "kvstore".
	  identity-allocation-mode: crd
	  cilium-endpoint-gc-interval: "5m0s"
	
	  # If you want to run cilium in debug mode change this value to true
	  debug: "false"
	  # The agent can be put into the following three policy enforcement modes
	  # default, always and never.
	  # https://docs.cilium.io/en/latest/policy/intro/#policy-enforcement-modes
	  enable-policy: "default"
	
	  # Enable IPv4 addressing. If enabled, all endpoints are allocated an IPv4
	  # address.
	  enable-ipv4: "true"
	
	  # Enable IPv6 addressing. If enabled, all endpoints are allocated an IPv6
	  # address.
	  enable-ipv6: "false"
	  # Users who wish to specify their own custom CNI configuration file must set
	  # custom-cni-conf to "true", otherwise Cilium may overwrite the configuration.
	  custom-cni-conf: "false"
	  enable-bpf-clock-probe: "true"
	  # If you want cilium monitor to aggregate tracing for packets, set this level
	  # to "low", "medium", or "maximum". The higher the level, the fewer packets
	  # that will be seen in monitor output.
	  monitor-aggregation: medium
	
	  # The monitor aggregation interval governs the typical time between monitor
	  # notification events for each allowed connection.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-interval: 5s
	
	  # The monitor aggregation flags determine which TCP flags, upon the
	  # first observation, cause monitor notifications to be generated.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-flags: all
	  # Specifies the ratio (0.0-1.0) of total system memory to use for dynamic
	  # sizing of the TCP CT, non-TCP CT, NAT and policy BPF maps.
	  bpf-map-dynamic-size-ratio: "0.0025"
	  # bpf-policy-map-max specifies the maximum number of entries in endpoint
	  # policy map (per endpoint)
	  bpf-policy-map-max: "16384"
	  # bpf-lb-map-max specifies the maximum number of entries in bpf lb service,
	  # backend and affinity maps.
	  bpf-lb-map-max: "65536"
	  # Pre-allocation of map entries allows per-packet latency to be reduced, at
	  # the expense of up-front memory allocation for the entries in the maps. The
	  # default value below will minimize memory usage in the default installation;
	  # users who are sensitive to latency may consider setting this to "true".
	  #
	  # This option was introduced in Cilium 1.4. Cilium 1.3 and earlier ignore
	  # this option and behave as though it is set to "true".
	  #
	  # If this value is modified, then during the next Cilium startup the restore
	  # of existing endpoints and tracking of ongoing connections may be disrupted.
	  # As a result, reply packets may be dropped and the load-balancing decisions
	  # for established connections may change.
	  #
	  # If this option is set to "false" during an upgrade from 1.3 or earlier to
	  # 1.4 or later, then it may cause one-time disruptions during the upgrade.
	  preallocate-bpf-maps: "false"
	
	  # Regular expression matching compatible Istio sidecar istio-proxy
	  # container image names
	  sidecar-istio-proxy-image: "cilium/istio_proxy"
	
	  # Name of the cluster. Only relevant when building a mesh of clusters.
	  cluster-name: default
	  # Unique ID of the cluster. Must be unique across all connected clusters and
	  # in the range of 1 to 255. Only relevant when building a mesh of clusters.
	  cluster-id: ""
	
	  # Encapsulation mode for communication between nodes
	  # Possible values:
	  #   - disabled
	  #   - vxlan (default)
	  #   - geneve
	  tunnel: vxlan
	  # Enables L7 proxy for L7 policy enforcement and visibility
	  enable-l7-proxy: "true"
	
	  # wait-bpf-mount makes the init container wait until the bpf filesystem is mounted
	  wait-bpf-mount: "false"
	
	  masquerade: "true"
	  enable-bpf-masquerade: "true"
	
	  enable-xt-socket-fallback: "true"
	  install-iptables-rules: "true"
	
	  auto-direct-node-routes: "false"
	  enable-bandwidth-manager: "false"
	  enable-local-redirect-policy: "false"
	  kube-proxy-replacement: "probe"
	  kube-proxy-replacement-healthz-bind-address: ""
	  enable-health-check-nodeport: "true"
	  node-port-bind-protection: "true"
	  enable-auto-protect-node-port-range: "true"
	  enable-session-affinity: "true"
	  k8s-require-ipv4-pod-cidr: "true"
	  k8s-require-ipv6-pod-cidr: "false"
	  enable-endpoint-health-checking: "true"
	  enable-health-checking: "true"
	  enable-well-known-identities: "false"
	  enable-remote-node-identity: "true"
	  operator-api-serve-addr: "127.0.0.1:9234"
	  # Enable Hubble gRPC service.
	  enable-hubble: "true"
	  # UNIX domain socket for Hubble server to listen to.
	  hubble-socket-path: "/var/run/cilium/hubble.sock"
	  # An additional address for Hubble server to listen to (e.g. ":4244").
	  hubble-listen-address: ":4244"
	  hubble-disable-tls: "false"
	  hubble-tls-cert-file: /var/lib/cilium/tls/hubble/server.crt
	  hubble-tls-key-file: /var/lib/cilium/tls/hubble/server.key
	  hubble-tls-client-ca-files: /var/lib/cilium/tls/hubble/client-ca.crt
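	  # Sketch (not part of this ConfigMap, and assuming the hubble CLI ships in
	  # the agent image): with Hubble enabled above, recent flows can be listed
	  # from inside an agent pod:
	  #
	  #   kubectl -n kube-system exec ds/cilium -- hubble observe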
	  ipam: "cluster-pool"
	  cluster-pool-ipv4-cidr: "10.244.0.0/16"
	  cluster-pool-ipv4-mask-size: "24"
	  disable-cnp-status-updates: "true"
	  cgroup-root: "/run/cilium/cgroupv2"
	---
	# Source: cilium/templates/cilium-agent-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium
	rules:
	- apiGroups:
	  - networking.k8s.io
	  resources:
	  - networkpolicies
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - namespaces
	  - services
	  - nodes
	  - endpoints
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - pods
	  - pods/finalizers
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	  - delete
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  - nodes/status
	  verbs:
	  - patch
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  # Deprecated for removal in v1.10
	  - create
	  - list
	  - watch
	  - update
	
	  # This is used when validating policies in preflight. This will need to stay
	  # until we figure out how to avoid "get" inside the preflight, and should
	  # ideally be removed after that.
	  - get
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	---
	# Source: cilium/templates/cilium-operator-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium-operator
	rules:
	- apiGroups:
	  - ""
	  resources:
	  # to automatically delete [core|kube]dns pods so that they start being
	  # managed by Cilium
	  - pods
	  verbs:
	  - get
	  - list
	  - watch
	  - delete
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  # to perform the translation of a CNP that contains 'ToGroup' to its endpoints
	  - services
	  - endpoints
	  # to check apiserver connectivity
	  - namespaces
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/status
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  - create
	  - get
	  - list
	  - update
	  - watch
	# For cilium-operator running in HA mode.
	#
	# Cilium operator running in HA mode requires the use of a ResourceLock for Leader Election
	# between multiple running instances.
	# The preferred way of doing this is to use LeasesResourceLock, as edits to Leases are less
	# common and fewer objects in the cluster watch "all Leases".
	# Support for leases was introduced in coordination.k8s.io/v1 in the Kubernetes 1.14 release.
	# Cilium currently does not support HA mode for K8s versions < 1.14. This condition makes sure
	# that we only authorize access to leases resources in supported K8s versions; an illustrative
	# Lease object is sketched after this rule.
	- apiGroups:
	  - coordination.k8s.io
	  resources:
	  - leases
	  verbs:
	  - create
	  - get
	  - update
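	# For illustration only (not part of this manifest): the Lease these verbs
	# apply to would look roughly like the object below. The name, holder and
	# duration here are assumptions for the sketch, not values taken from this
	# cluster.
	#
	#   apiVersion: coordination.k8s.io/v1
	#   kind: Lease
	#   metadata:
	#     name: cilium-operator-resource-lock
	#     namespace: kube-system
	#   spec:
	#     holderIdentity: cilium-operator-78f49c47f-snf9m
	#     leaseDurationSeconds: 15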
	---
	# Source: cilium/templates/cilium-agent-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium
	subjects:
	- kind: ServiceAccount
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium-operator
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium-operator
	subjects:
	- kind: ServiceAccount
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-agent-daemonset.yaml
	apiVersion: apps/v1
	kind: DaemonSet
	metadata:
	  labels:
	    k8s-app: cilium
	  name: cilium
	  namespace: kube-system
	spec:
	  selector:
	    matchLabels:
	      k8s-app: cilium
	  updateStrategy:
	    rollingUpdate:
	      maxUnavailable: 2
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	        # This annotation, plus the CriticalAddonsOnly toleration, marks
	        # cilium as a critical pod in the cluster, which ensures cilium
	        # gets priority scheduling.
	        # https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/
	        scheduler.alpha.kubernetes.io/critical-pod: ""
	      labels:
	        k8s-app: cilium
	    spec:
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: k8s-app
	                operator: In
	                values:
	                - cilium
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        command:
	        - cilium-agent
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9876
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 10
	          # The initial delay for the liveness probe is intentionally large to
	          # avoid an endless kill & restart cycle in the event that the initial
	          # bootstrapping takes longer than expected.
	          initialDelaySeconds: 120
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
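	        # Roughly equivalent manual check (illustrative, not part of the
	        # probe; the "brief" header appears to request a terse health reply):
	        #
	        #   curl -H 'brief: true' http://127.0.0.1:9876/healthz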
	        readinessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9876
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 3
	          initialDelaySeconds: 5
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_FLANNEL_MASTER_DEVICE
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-master-device
	              name: cilium-config
	              optional: true
	        - name: CILIUM_FLANNEL_UNINSTALL_ON_EXIT
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-uninstall-on-exit
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CLUSTERMESH_CONFIG
	          value: /var/lib/cilium/clustermesh/
	        - name: CILIUM_CNI_CHAINING_MODE
	          valueFrom:
	            configMapKeyRef:
	              key: cni-chaining-mode
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CUSTOM_CNI_CONF
	          valueFrom:
	            configMapKeyRef:
	              key: custom-cni-conf
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        lifecycle:
	          postStart:
	            exec:
	              command:
	              - "/cni-install.sh"
	              - "--enable-debug=false"
	          preStop:
	            exec:
	              command:
	              - /cni-uninstall.sh
	        name: cilium-agent
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	            - SYS_MODULE
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        - mountPath: /host/opt/cni/bin
	          name: cni-path
	        - mountPath: /host/etc/cni/net.d
	          name: etc-cni-netd
	        - mountPath: /var/lib/cilium/clustermesh
	          name: clustermesh-secrets
	          readOnly: true
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
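	          # The cilium-config ConfigMap defined earlier is projected here: each
	          # key becomes a file under /tmp/cilium/config-map, which the agent
	          # consumes via its --config-dir argument above.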
	          # Needed to be able to load kernel modules
	        - mountPath: /lib/modules
	          name: lib-modules
	          readOnly: true
	        - mountPath: /run/xtables.lock
	          name: xtables-lock
	        - mountPath: /var/lib/cilium/tls/hubble
	          name: hubble-tls
	          readOnly: true
	      hostNetwork: true
	      initContainers:
	      # Required to mount cgroup2 filesystem on the underlying Kubernetes node.
	      # We use nsenter command with host's cgroup and mount namespaces enabled.
	      - name: mount-cgroup
	        env:
	          - name: CGROUP_ROOT
	            value: /run/cilium/cgroupv2
	          - name: BIN_PATH
	            value: /opt/cni/bin
	        command:
	          - sh
	          - -c
	          # A statically linked Go binary (cilium-mount) is invoked to avoid any
	          # dependency on utilities like mount that can be missing on certain
	          # distros installed on the underlying host. The binary is copied to the
	          # same directory where the cilium cni plugin is installed so that exec
	          # permissions are available.
	          - 'cp /usr/bin/cilium-mount /hostbin/cilium-mount && nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; rm /hostbin/cilium-mount'
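	          # Reading the one-liner above: nsenter enters PID 1's cgroup and
	          # mount namespaces (/hostproc is the host's /proc, mounted below),
	          # runs the copied cilium-mount against $CGROUP_ROOT, and finally
	          # removes the copied binary.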
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        volumeMounts:
	          - mountPath: /hostproc
	            name: hostproc
	          - mountPath: /hostbin
	            name: cni-path
	        securityContext:
	          privileged: true
	      - command:
	        - /init-container.sh
	        env:
	        - name: CILIUM_ALL_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_BPF_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-bpf-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_WAIT_BPF_MOUNT
	          valueFrom:
	            configMapKeyRef:
	              key: wait-bpf-mount
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        name: clean-cilium-state
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	          mountPropagation: HostToContainer
	          # Required to mount cgroup filesystem from the host to cilium agent pod
	        - mountPath: /run/cilium/cgroupv2
	          name: cilium-cgroup
	          mountPropagation: HostToContainer
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        resources:
	          requests:
	            cpu: 100m
	            memory: 100Mi
	      restartPolicy: Always
	      priorityClassName: system-node-critical
	      serviceAccount: cilium
	      serviceAccountName: cilium
	      terminationGracePeriodSeconds: 1
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To keep state between restarts / upgrades
	      - hostPath:
	          path: /var/run/cilium
	          type: DirectoryOrCreate
	        name: cilium-run
	        # To keep state between restarts / upgrades for bpf maps
	      - hostPath:
	          path: /sys/fs/bpf
	          type: DirectoryOrCreate
	        name: bpf-maps
	      # To mount cgroup2 filesystem on the host
	      - hostPath:
	          path: /proc
	          type: Directory
	        name: hostproc
	      # To keep state between restarts / upgrades for cgroup2 filesystem
	      - hostPath:
	          path: /run/cilium/cgroupv2
	          type: DirectoryOrCreate
	        name: cilium-cgroup
	      # To install the cilium cni plugin on the host
	      - hostPath:
	          path: /opt/cni/bin
	          type: DirectoryOrCreate
	        name: cni-path
	        # To install the cilium cni configuration on the host
	      - hostPath:
	          path: /etc/cni/net.d
	          type: DirectoryOrCreate
	        name: etc-cni-netd
	        # To be able to load kernel modules
	      - hostPath:
	          path: /lib/modules
	        name: lib-modules
	        # To access iptables concurrently with other processes (e.g. kube-proxy)
	      - hostPath:
	          path: /run/xtables.lock
	          type: FileOrCreate
	        name: xtables-lock
	        # To read the clustermesh configuration
	      - name: clustermesh-secrets
	        secret:
	          defaultMode: 420
	          optional: true
	          secretName: cilium-clustermesh
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
	      - name: hubble-tls
	        projected:
	          sources:
	          - secret:
	              name: hubble-server-certs
	              items:
	                - key: tls.crt
	                  path: server.crt
	                - key: tls.key
	                  path: server.key
	              optional: true
	          - configMap:
	              name: hubble-ca-cert
	              items:
	                - key: ca.crt
	                  path: client-ca.crt
	              optional: true
	---
	# Source: cilium/templates/cilium-operator-deployment.yaml
	apiVersion: apps/v1
	kind: Deployment
	metadata:
	  labels:
	    io.cilium/app: operator
	    name: cilium-operator
	  name: cilium-operator
	  namespace: kube-system
	spec:
	  # We support HA mode only for Kubernetes version >= 1.14
	  # See docs on ServerCapabilities.LeasesResourceLock in file pkg/k8s/version/version.go
	  # for more details.
	  replicas: 1
	  selector:
	    matchLabels:
	      io.cilium/app: operator
	      name: cilium-operator
	  strategy:
	    rollingUpdate:
	      maxSurge: 1
	      maxUnavailable: 1
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	      labels:
	        io.cilium/app: operator
	        name: cilium-operator
	    spec:
	      # In HA mode, cilium-operator pods must not be scheduled on the same
	      # node as they will clash with each other.
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: io.cilium/app
	                operator: In
	                values:
	                - operator
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        - --debug=$(CILIUM_DEBUG)
	        command:
	        - cilium-operator-generic
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_DEBUG
	          valueFrom:
	            configMapKeyRef:
	              key: debug
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/operator-generic:v1.9.9@sha256:3726a965cd960295ca3c5e7f2b543c02096c0912c6652eb8bbb9ce54bcaa99d8"
	        imagePullPolicy: IfNotPresent
	        name: cilium-operator
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9234
	            scheme: HTTP
	          initialDelaySeconds: 60
	          periodSeconds: 10
	          timeoutSeconds: 3
	        volumeMounts:
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
	      hostNetwork: true
	      restartPolicy: Always
	      priorityClassName: system-cluster-critical
	      serviceAccount: cilium-operator
	      serviceAccountName: cilium-operator
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
	
	I0602 19:39:35.659336   11612 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0602 19:39:35.659440   11612 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (23204 bytes)
	I0602 19:39:35.861379   11612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0602 19:39:39.759125   11612 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (3.8975068s)
	I0602 19:39:39.759125   11612 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0602 19:39:39.774571   11612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:39:39.776556   11612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae minikube.k8s.io/name=cilium-20220602191616-12108 minikube.k8s.io/updated_at=2022_06_02T19_39_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:39:39.782274   11612 ops.go:34] apiserver oom_adj: -16
	I0602 19:39:40.086943   11612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:39:40.765699   11612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:39:41.271995   11612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:39:41.770095   11612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:39:42.267788   11612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:39:42.761668   11612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:39:43.264439   11612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:39:43.772533   11612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:39:44.269288   11612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:39:44.771421   11612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:39:45.265969   11612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:39:45.764150   11612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:39:46.264272   11612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:39:46.759336   11612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:39:47.261224   11612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:39:47.773554   11612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:39:48.266956   11612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:39:49.248322   11612 kubeadm.go:1045] duration metric: took 9.4891617s to wait for elevateKubeSystemPrivileges.
	I0602 19:39:49.248322   11612 kubeadm.go:397] StartCluster complete in 34.4483245s
	I0602 19:39:49.248322   11612 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:39:49.248322   11612 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0602 19:39:49.250580   11612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:39:50.090580   11612 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cilium-20220602191616-12108" rescaled to 1
	I0602 19:39:50.090580   11612 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 19:39:50.090580   11612 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0602 19:39:50.091117   11612 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0602 19:39:50.095762   11612 addons.go:65] Setting storage-provisioner=true in profile "cilium-20220602191616-12108"
	I0602 19:39:50.091817   11612 config.go:178] Loaded profile config "cilium-20220602191616-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:39:50.095831   11612 addons.go:65] Setting default-storageclass=true in profile "cilium-20220602191616-12108"
	I0602 19:39:50.095762   11612 out.go:177] * Verifying Kubernetes components...
	I0602 19:39:50.095998   11612 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cilium-20220602191616-12108"
	I0602 19:39:50.095998   11612 addons.go:153] Setting addon storage-provisioner=true in "cilium-20220602191616-12108"
	W0602 19:39:50.096481   11612 addons.go:165] addon storage-provisioner should already be in state true
	I0602 19:39:50.096481   11612 host.go:66] Checking if "cilium-20220602191616-12108" exists ...
	I0602 19:39:50.112210   11612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 19:39:50.117789   11612 cli_runner.go:164] Run: docker container inspect cilium-20220602191616-12108 --format={{.State.Status}}
	I0602 19:39:50.122786   11612 cli_runner.go:164] Run: docker container inspect cilium-20220602191616-12108 --format={{.State.Status}}
	I0602 19:39:50.641947   11612 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0602 19:39:50.655881   11612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20220602191616-12108
	I0602 19:39:51.369831   11612 cli_runner.go:217] Completed: docker container inspect cilium-20220602191616-12108 --format={{.State.Status}}: (1.252038s)
	I0602 19:39:51.395710   11612 cli_runner.go:217] Completed: docker container inspect cilium-20220602191616-12108 --format={{.State.Status}}: (1.2728291s)
	I0602 19:39:51.400591   11612 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 19:39:51.403046   11612 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 19:39:51.403046   11612 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0602 19:39:51.412260   11612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602191616-12108
	I0602 19:39:51.444047   11612 addons.go:153] Setting addon default-storageclass=true in "cilium-20220602191616-12108"
	W0602 19:39:51.444047   11612 addons.go:165] addon default-storageclass should already be in state true
	I0602 19:39:51.444047   11612 host.go:66] Checking if "cilium-20220602191616-12108" exists ...
	I0602 19:39:51.481022   11612 cli_runner.go:164] Run: docker container inspect cilium-20220602191616-12108 --format={{.State.Status}}
	I0602 19:39:51.988669   11612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20220602191616-12108: (1.3327831s)
	I0602 19:39:51.989679   11612 node_ready.go:35] waiting up to 5m0s for node "cilium-20220602191616-12108" to be "Ready" ...
	I0602 19:39:52.047025   11612 node_ready.go:49] node "cilium-20220602191616-12108" has status "Ready":"True"
	I0602 19:39:52.047025   11612 node_ready.go:38] duration metric: took 57.3452ms waiting for node "cilium-20220602191616-12108" to be "Ready" ...
	I0602 19:39:52.047025   11612 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 19:39:52.146109   11612 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.5041031s)
	I0602 19:39:52.147084   11612 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0602 19:39:52.172869   11612 pod_ready.go:78] waiting up to 5m0s for pod "cilium-lkfwf" in "kube-system" namespace to be "Ready" ...
	I0602 19:39:52.813474   11612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602191616-12108: (1.4012085s)
	I0602 19:39:52.813474   11612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54726 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\cilium-20220602191616-12108\id_rsa Username:docker}
	I0602 19:39:52.829482   11612 cli_runner.go:217] Completed: docker container inspect cilium-20220602191616-12108 --format={{.State.Status}}: (1.3484553s)
	I0602 19:39:52.829482   11612 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0602 19:39:52.829482   11612 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0602 19:39:52.841502   11612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602191616-12108
	I0602 19:39:53.664144   11612 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 19:39:54.168996   11612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220602191616-12108: (1.3274896s)
	I0602 19:39:54.168996   11612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54726 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\cilium-20220602191616-12108\id_rsa Username:docker}
	I0602 19:39:54.358818   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:39:55.077405   11612 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 19:39:56.542008   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:39:56.941530   11612 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.2773748s)
	I0602 19:39:57.545484   11612 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.4680382s)
	I0602 19:39:57.552248   11612 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0602 19:39:57.557185   11612 addons.go:417] enableAddons completed in 7.4665792s
	I0602 19:39:58.940590   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:01.443397   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:03.940497   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:06.240263   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:08.441641   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:10.848942   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:12.858131   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:15.357442   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:17.856594   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:20.356975   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:22.454553   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:24.857752   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:26.859229   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:29.357288   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:31.941065   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:34.357406   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:36.782832   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:38.859355   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:41.284381   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:43.452538   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:45.793872   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:47.840103   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:50.051578   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:52.353641   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:54.359846   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:56.800707   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:40:59.294364   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:41:01.310976   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:41:03.795712   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:41:06.288904   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:41:08.842988   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:41:11.299433   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:41:13.789478   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:41:15.794864   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:41:17.803028   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:41:20.294316   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:41:22.299651   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:41:24.804851   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:41:27.300161   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:41:29.793566   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:41:31.800390   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:41:34.286113   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:41:36.301400   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:41:38.562004   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:41:40.796316   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:41:43.305439   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:41:45.790647   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:41:47.803740   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:41:50.285038   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:41:52.288647   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:41:56.277362   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:41:58.968901   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:00.980195   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:03.223031   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:05.288542   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:07.290118   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:09.297761   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:11.785740   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:13.799234   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:16.288990   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:18.290081   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:20.300935   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:22.784616   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:24.790986   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:26.799275   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:28.801302   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:31.283690   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:33.290265   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:35.299253   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:37.785784   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:39.795403   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:42.294347   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:44.296112   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:46.786628   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:48.804950   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:51.300814   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:53.303141   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:55.778856   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:57.782356   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:42:59.791892   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:43:01.804323   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:43:04.294752   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:43:06.790383   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:43:09.291937   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:43:11.829454   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:43:14.297643   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:43:16.802851   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:43:27.323831   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:43:32.921516   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:43:35.578602   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:43:38.219516   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:43:40.674888   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:43:42.854127   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:43:45.298041   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:43:47.790871   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:43:49.799713   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:43:51.800456   11612 pod_ready.go:102] pod "cilium-lkfwf" in "kube-system" namespace has status "Ready":"False"
	I0602 19:43:52.340226   11612 pod_ready.go:81] duration metric: took 4m0.1662399s waiting for pod "cilium-lkfwf" in "kube-system" namespace to be "Ready" ...
	E0602 19:43:52.340226   11612 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0602 19:43:52.340226   11612 pod_ready.go:78] waiting up to 5m0s for pod "cilium-operator-78f49c47f-snf9m" in "kube-system" namespace to be "Ready" ...
	I0602 19:43:52.369216   11612 pod_ready.go:92] pod "cilium-operator-78f49c47f-snf9m" in "kube-system" namespace has status "Ready":"True"
	I0602 19:43:52.369216   11612 pod_ready.go:81] duration metric: took 28.9903ms waiting for pod "cilium-operator-78f49c47f-snf9m" in "kube-system" namespace to be "Ready" ...
	I0602 19:43:52.369216   11612 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-6flrb" in "kube-system" namespace to be "Ready" ...
	I0602 19:43:54.444719   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:43:56.923651   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:43:59.414551   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:01.417662   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:03.915623   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:06.418292   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:08.419738   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:10.424435   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:12.424891   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:14.923489   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:16.928882   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:19.415709   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:21.429410   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:23.926914   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:26.423174   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:28.428277   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:30.439637   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:32.925342   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:35.415941   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:37.424243   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:39.925855   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:42.420346   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:44.424745   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:46.934589   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:49.415977   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:51.912972   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:53.923388   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:56.418137   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:58.913097   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:00.926017   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:03.420253   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:05.421419   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:07.914845   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:09.938824   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:12.422144   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:14.915305   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:16.925627   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:18.928961   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:21.424687   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:23.908577   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:26.408188   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:28.410072   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:30.425070   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:32.432855   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:34.988897   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:37.411055   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:39.421012   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:41.907546   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:43.909987   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:46.456242   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:48.966120   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:51.423686   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:53.429991   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:55.937158   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:58.412172   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:00.446385   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:02.920944   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:04.925373   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:07.418700   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:09.917140   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:11.927311   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:14.415947   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:16.927070   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:19.412475   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:21.421850   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:23.423626   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:25.931595   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:28.423791   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:30.922650   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:32.923471   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:34.938432   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:37.411449   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:39.918209   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:42.422969   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:44.913110   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:46.922296   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:49.417565   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:51.917138   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:54.431186   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:56.922439   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:59.414486   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:01.427193   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:03.919623   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:05.923651   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:08.420129   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:10.918338   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:12.925975   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:15.419070   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:17.914734   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:19.933115   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:22.418383   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:24.914609   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:27.412413   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:29.417089   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:31.917206   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:33.926250   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:36.422024   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:38.916244   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:40.928829   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:43.408426   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:45.424591   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:48.292753   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:53.679704   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:53.772454   11612 pod_ready.go:81] duration metric: took 4m1.4021992s waiting for pod "coredns-64897985d-6flrb" in "kube-system" namespace to be "Ready" ...
	E0602 19:47:53.772454   11612 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0602 19:47:53.772454   11612 pod_ready.go:38] duration metric: took 8m1.7230999s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 19:47:53.776514   11612 out.go:177] 
	W0602 19:47:53.780496   11612 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0602 19:47:53.780496   11612 out.go:239] * 
	W0602 19:47:53.781444   11612 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0602 19:47:53.785462   11612 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/cilium/Start (628.97s)
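
The GUEST_START failure above is minikube's extra-wait step giving up: the labeled system-critical pods never all reached "Ready" (the coredns pod in the log stayed not-Ready for the full 4m/8m windows). A minimal sketch of checking the same pods by hand, assuming the failing profile's kubeconfig context is currently selected; the labels and pod name are taken from the log above:

	# List the system-critical pods minikube polls during the extra wait
	kubectl get pods -n kube-system -l k8s-app=kube-dns
	kubectl get pods -n kube-system -l component=kube-apiserver
	# Describe the stuck CoreDNS pod to see scheduling/CNI events
	kubectl describe pod -n kube-system coredns-64897985d-6flrb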

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (77.84s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-20220602193528-12108 --alsologtostderr -v=1
E0602 19:46:15.313174   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220602192234-12108\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p newest-cni-20220602193528-12108 --alsologtostderr -v=1: exit status 80 (8.9115633s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-20220602193528-12108 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0602 19:46:14.186922    6524 out.go:296] Setting OutFile to fd 1884 ...
	I0602 19:46:14.257740    6524 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 19:46:14.257740    6524 out.go:309] Setting ErrFile to fd 1912...
	I0602 19:46:14.257740    6524 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 19:46:14.295072    6524 out.go:303] Setting JSON to false
	I0602 19:46:14.295072    6524 mustload.go:65] Loading cluster: newest-cni-20220602193528-12108
	I0602 19:46:14.295747    6524 config.go:178] Loaded profile config "newest-cni-20220602193528-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:46:14.309959    6524 cli_runner.go:164] Run: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}
	I0602 19:46:17.479549    6524 cli_runner.go:217] Completed: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}: (3.1693465s)
	I0602 19:46:17.479728    6524 host.go:66] Checking if "newest-cni-20220602193528-12108" exists ...
	I0602 19:46:17.490785    6524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:46:18.840290    6524 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.3494994s)
	I0602 19:46:18.843261    6524 pause.go:58] "namespaces" ="keys" ="(MISSING)"
	I0602 19:46:18.847247    6524 out.go:177] * Pausing node newest-cni-20220602193528-12108 ... 
	I0602 19:46:18.851287    6524 host.go:66] Checking if "newest-cni-20220602193528-12108" exists ...
	I0602 19:46:18.877251    6524 ssh_runner.go:195] Run: systemctl --version
	I0602 19:46:18.888263    6524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:46:20.265469    6524 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.3772004s)
	I0602 19:46:20.266520    6524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54943 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220602193528-12108\id_rsa Username:docker}
	I0602 19:46:20.551452    6524 ssh_runner.go:235] Completed: systemctl --version: (1.6741936s)
	I0602 19:46:20.572480    6524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 19:46:20.668343    6524 pause.go:50] kubelet running: true
	I0602 19:46:20.683110    6524 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0602 19:46:21.569037    6524 retry.go:31] will retry after 276.165072ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0602 19:46:21.872012    6524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 19:46:21.958864    6524 pause.go:50] kubelet running: true
	I0602 19:46:21.974863    6524 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0602 19:46:22.652512    6524 out.go:177] 
	W0602 19:46:22.655513    6524 out.go:239] X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	W0602 19:46:22.655513    6524 out.go:239] * 
	W0602 19:46:22.782706    6524 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube_pause_8a34b101973a5475dd3f2895f630b939c2202307_5.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0602 19:46:22.786753    6524 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:313: out/minikube-windows-amd64.exe pause -p newest-cni-20220602193528-12108 --alsologtostderr -v=1 failed: exit status 80
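
The pause failure reduces to a single step: `sudo systemctl disable --now kubelet` exits non-zero inside the node container because update-rc.d finds no runlevels in the unit's Default-Start. A sketch of reproducing just that step over minikube's SSH tunnel, assuming the profile from this run still exists; both commands appear in the stderr above:

	# Confirm kubelet is active inside the node, as pause.go checks first
	out/minikube-windows-amd64.exe -p newest-cni-20220602193528-12108 ssh -- sudo systemctl is-active kubelet
	# Re-run the step that failed in this test
	out/minikube-windows-amd64.exe -p newest-cni-20220602193528-12108 ssh -- sudo systemctl disable --now kubelet
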
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220602193528-12108
helpers_test.go:231: (dbg) Done: docker inspect newest-cni-20220602193528-12108: (1.3957616s)
helpers_test.go:235: (dbg) docker inspect newest-cni-20220602193528-12108:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "494859eb0fb175e426210a880c5590c7d45afd36495c4b566f182cc72aceea2d",
	        "Created": "2022-06-02T19:42:09.2509866Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 277868,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T19:44:54.1265864Z",
	            "FinishedAt": "2022-06-02T19:44:32.7793205Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/494859eb0fb175e426210a880c5590c7d45afd36495c4b566f182cc72aceea2d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/494859eb0fb175e426210a880c5590c7d45afd36495c4b566f182cc72aceea2d/hostname",
	        "HostsPath": "/var/lib/docker/containers/494859eb0fb175e426210a880c5590c7d45afd36495c4b566f182cc72aceea2d/hosts",
	        "LogPath": "/var/lib/docker/containers/494859eb0fb175e426210a880c5590c7d45afd36495c4b566f182cc72aceea2d/494859eb0fb175e426210a880c5590c7d45afd36495c4b566f182cc72aceea2d-json.log",
	        "Name": "/newest-cni-20220602193528-12108",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20220602193528-12108:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220602193528-12108",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f9a6db0863ea21cba32fe26a339ddd0c48312b903df5653e1b208d284dac1e2d-init/diff:/var/lib/docker/overlay2/dfce970b43800856c522d9750e5e1364e8adf4be4cf71ca7c53d79b33355f5a7/diff:/var/lib/docker/overlay2/4fd23a1b84854239f1bb855d05e42ecd6acbd1b0944b347813a56f5f45356a42/diff:/var/lib/docker/overlay2/864c5b1fbc297750771bb843fdeb4bafa10868a71716f4a01f1119609fb34667/diff:/var/lib/docker/overlay2/0f11f6855118857c743b90ca120ff7aa550f8157d475abf59df950433a5bc6e8/diff:/var/lib/docker/overlay2/2ae7f559725a060dc3b3a9c2fbd554b98114ae47dbf8db75f13bd8a95cbae19a/diff:/var/lib/docker/overlay2/48f41ac288d1037223ac101e6bc07f05729cdcecd98cc85971db99e90765c437/diff:/var/lib/docker/overlay2/8d4eaae639ade3ad3459b4fb67dbcac83774b72a2550b0a4bca1f21d122b20e6/diff:/var/lib/docker/overlay2/e06515bb91756221300de52336376d32ef9bd8685a92352e522936c4947b88ee/diff:/var/lib/docker/overlay2/a2f615fb794b704dc3823080c47e2c357cf4826ec91f6ae190c7497bb18a80cd/diff:/var/lib/docker/overlay2/22f99f
8a3da21c6e2be4c5c5e9d969af73e7695aaf9b0c7d0d09b5795ba76416/diff:/var/lib/docker/overlay2/9c0266785c64b9f6c471863067ca9db045a5aa61167a7817217cf01825a7d868/diff:/var/lib/docker/overlay2/b8a0250c9ae7d899ee3e46414c2db7f7ba363793900f8fcbf1b470586ebe7bd9/diff:/var/lib/docker/overlay2/00afbeac619cb9c06d4da311f5fc5aa3f5147b88b291acf06d4c4b36984ad5a2/diff:/var/lib/docker/overlay2/da51241ed08bd861b9d27902198eae13c3e4aac5c79f522e9f3fa209ea35e8d3/diff:/var/lib/docker/overlay2/b01176f7dbe98e3004db7c0fe45d94616a803dd8ae9cbdf3a1f2a188604178af/diff:/var/lib/docker/overlay2/0ebb0ff0177c8116e72a14ac704b161f75922cea05fe804ad1f7b83f4cd3dd70/diff:/var/lib/docker/overlay2/bae8d175bc3e334a70aaa239643efa0e8b453ab163f077d9cef60e3840c717ba/diff:/var/lib/docker/overlay2/e72a79f763a44dc32f9a2e84dc5e28a060e7fbb9f4624cb8aaa084dd356522ec/diff:/var/lib/docker/overlay2/2e1bc304b205033ad7f49fb8db243b0991596e0eec913fd13e8382aa25767e21/diff:/var/lib/docker/overlay2/ebb9b39dedfc09f9f34ea879f56a8ffd24ab9f9bf8acc93aa9df5eb93dba58e8/diff:/var/lib/d
ocker/overlay2/bffdca36eba4bce9086f2c269bcfe5b915d807483717f0e27acbd51b5bbfc11b/diff:/var/lib/docker/overlay2/96c321cbf06c0050c8a0a7897e9533db1ee5788eb09b1e1d605bdd1134af8eca/diff:/var/lib/docker/overlay2/735422b44af98e330209fe1c4273bf57aa33fcfd770f3e9d6f1a6e59f7545920/diff:/var/lib/docker/overlay2/8dc177c0589f67ded7d9c229d3c587fe77b3d1c68cf0a5af871bc23768d67d84/diff:/var/lib/docker/overlay2/9a29541ccfee3849e0691950c599bb7e4e51d9026724b1ad13abc8d8e9c140e0/diff:/var/lib/docker/overlay2/50fe1dc8f357b5d624681e6f14d98e6d33a8b6b53d70293ba90ac4435a1e18d8/diff:/var/lib/docker/overlay2/86f301a296dbb7422a3d55a008a9f38278a7a19d68a0f735d298c0c2a431ee30/diff:/var/lib/docker/overlay2/dc8087ea592587f8cb5392cc0ee739c33f2724c47b83767d593b3065914820b0/diff:/var/lib/docker/overlay2/15163601889f0d414f35ccd64ae33a52958605b5b7e50618ed5d4f4bd06ec65b/diff:/var/lib/docker/overlay2/a50cf19d9d69b9c68c6c66a918cbde678b49e8d566d06772af22bf99191b08f3/diff:/var/lib/docker/overlay2/621f3b0fc578721c5d0465771ad007f022ed238fa5a2076f807c077680c
26d27/diff:/var/lib/docker/overlay2/2652f9ffde92786a77e3bb35fe07c03a623aaad541f0ca9710839800c4b470e4/diff:/var/lib/docker/overlay2/c853755ee76ea55ad6c00f5eaff82196f4953ee6fb2d27e27ba35f86d56bfc32/diff:/var/lib/docker/overlay2/a0f70e6416a8e618ea7475b5e7f4cdc9a66ac39f0a6c1969c569d8e4f0b5e9eb/diff:/var/lib/docker/overlay2/275d2c643ecb011298df16e0794bebb9a7ec82e190aea53a90369288c521f75e/diff:/var/lib/docker/overlay2/a7e78f238badc23c2c38b7e9b9c4428c0614e825744076161295740d46a20957/diff:/var/lib/docker/overlay2/39fcd4c392271449973511a31d445289c1f8d378d01759fef12c430c9f44f2b8/diff:/var/lib/docker/overlay2/e1c51360d327e86575fe8248415fae12e9dbdde580db0e6f4f4e485ac9f92e3b/diff:/var/lib/docker/overlay2/fecd88783858177cbe3b751f0717b370c5556d7cf0ef163e2710f16fce09d53c/diff:/var/lib/docker/overlay2/3b4c7afaac6f5818bc33bec8c0ec442eb5a1010d0de6fe488460ee83a3901b21/diff:/var/lib/docker/overlay2/47d0047bc42c34ea02c33c1500f96c5109f27f84f973a5636832bbc855761e3f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f9a6db0863ea21cba32fe26a339ddd0c48312b903df5653e1b208d284dac1e2d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f9a6db0863ea21cba32fe26a339ddd0c48312b903df5653e1b208d284dac1e2d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f9a6db0863ea21cba32fe26a339ddd0c48312b903df5653e1b208d284dac1e2d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220602193528-12108",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220602193528-12108/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220602193528-12108",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220602193528-12108",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220602193528-12108",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "39e9ce1af27b6e7b2cf3511d876ba94d60b25e9fb53562144ceda7121413b8be",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54943"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54944"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54945"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54946"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54947"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/39e9ce1af27b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220602193528-12108": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "494859eb0fb1",
	                        "newest-cni-20220602193528-12108"
	                    ],
	                    "NetworkID": "4da5d80e8d86dd2da8c242516e00ea62a0606e89d2c53fe365cac4b3373e53c6",
	                    "EndpointID": "146dd801d3f4d1133b863665a10cd83555444977db5d090d9502d2c0072a3932",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
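
The "Ports" section of the inspect output above is what the pause path resolves with a Go template (see the cli_runner lines in the earlier stderr). The same lookup can be run from a shell; the template and container name below are copied from this run, and for this container the 22/tcp mapping should print 54943:

	# Resolve the host port Docker mapped to the node's SSH port
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-20220602193528-12108
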
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220602193528-12108 -n newest-cni-20220602193528-12108
E0602 19:46:32.189236   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220602192231-12108\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220602193528-12108 -n newest-cni-20220602193528-12108: (8.8257057s)
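
The post-mortem first queries only the host field of `minikube status` via a Go template, then pulls the last 25 log lines. Both checks can be repeated by hand, and the full logs the advice boxes ask for can be written to a file; the commands below are copied from this report:

	# Host state only, as helpers_test.go runs it
	out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220602193528-12108
	# Capture full logs for attaching to a GitHub issue
	out/minikube-windows-amd64.exe -p newest-cni-20220602193528-12108 logs --file=logs.txt
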
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-20220602193528-12108 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-20220602193528-12108 logs -n 25: (14.6264437s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |       User        |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|-------------------|----------------|---------------------|---------------------|
	| start   | -p                                                         | old-k8s-version-20220602192231-12108            | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:26 GMT | 02 Jun 22 19:34 GMT |
	|         | old-k8s-version-20220602192231-12108                       |                                                 |                   |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |                   |                |                     |                     |
	|         | --wait=true --kvm-network=default                          |                                                 |                   |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                 |                   |                |                     |                     |
	|         | --disable-driver-mounts                                    |                                                 |                   |                |                     |                     |
	|         | --keep-context=false --driver=docker                       |                                                 |                   |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                                                 |                   |                |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220602192441-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:27 GMT | 02 Jun 22 19:34 GMT |
	|         | default-k8s-different-port-20220602192441-12108            |                                                 |                   |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |                   |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |                   |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                 |                   |                |                     |                     |
	| ssh     | -p                                                         | old-k8s-version-20220602192231-12108            | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:35 GMT | 02 Jun 22 19:35 GMT |
	|         | old-k8s-version-20220602192231-12108                       |                                                 |                   |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |                   |                |                     |                     |
	| delete  | -p                                                         | embed-certs-20220602192235-12108                | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:34 GMT | 02 Jun 22 19:35 GMT |
	|         | embed-certs-20220602192235-12108                           |                                                 |                   |                |                     |                     |
	| ssh     | -p                                                         | no-preload-20220602192234-12108                 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:35 GMT | 02 Jun 22 19:35 GMT |
	|         | no-preload-20220602192234-12108                            |                                                 |                   |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |                   |                |                     |                     |
	| pause   | -p                                                         | old-k8s-version-20220602192231-12108            | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:35 GMT | 02 Jun 22 19:35 GMT |
	|         | old-k8s-version-20220602192231-12108                       |                                                 |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |                   |                |                     |                     |
	| pause   | -p                                                         | no-preload-20220602192234-12108                 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:35 GMT | 02 Jun 22 19:35 GMT |
	|         | no-preload-20220602192234-12108                            |                                                 |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |                   |                |                     |                     |
	| delete  | -p                                                         | embed-certs-20220602192235-12108                | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:35 GMT | 02 Jun 22 19:35 GMT |
	|         | embed-certs-20220602192235-12108                           |                                                 |                   |                |                     |                     |
	| unpause | -p                                                         | old-k8s-version-20220602192231-12108            | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:35 GMT | 02 Jun 22 19:35 GMT |
	|         | old-k8s-version-20220602192231-12108                       |                                                 |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |                   |                |                     |                     |
	| unpause | -p                                                         | no-preload-20220602192234-12108                 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:35 GMT | 02 Jun 22 19:35 GMT |
	|         | no-preload-20220602192234-12108                            |                                                 |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |                   |                |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220602192441-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:35 GMT | 02 Jun 22 19:35 GMT |
	|         | default-k8s-different-port-20220602192441-12108            |                                                 |                   |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |                   |                |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220602192441-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:35 GMT | 02 Jun 22 19:36 GMT |
	|         | default-k8s-different-port-20220602192441-12108            |                                                 |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |                   |                |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220602192441-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:36 GMT | 02 Jun 22 19:36 GMT |
	|         | default-k8s-different-port-20220602192441-12108            |                                                 |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |                   |                |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220602192231-12108            | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:35 GMT | 02 Jun 22 19:36 GMT |
	|         | old-k8s-version-20220602192231-12108                       |                                                 |                   |                |                     |                     |
	| delete  | -p                                                         | no-preload-20220602192234-12108                 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:35 GMT | 02 Jun 22 19:36 GMT |
	|         | no-preload-20220602192234-12108                            |                                                 |                   |                |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220602192231-12108            | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:36 GMT | 02 Jun 22 19:36 GMT |
	|         | old-k8s-version-20220602192231-12108                       |                                                 |                   |                |                     |                     |
	| delete  | -p                                                         | no-preload-20220602192234-12108                 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:36 GMT | 02 Jun 22 19:36 GMT |
	|         | no-preload-20220602192234-12108                            |                                                 |                   |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220602192441-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:36 GMT | 02 Jun 22 19:37 GMT |
	|         | default-k8s-different-port-20220602192441-12108            |                                                 |                   |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220602192441-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:37 GMT | 02 Jun 22 19:37 GMT |
	|         | default-k8s-different-port-20220602192441-12108            |                                                 |                   |                |                     |                     |
	| start   | -p newest-cni-20220602193528-12108 --memory=2200           | newest-cni-20220602193528-12108                 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:35 GMT | 02 Jun 22 19:44 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |                   |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |                   |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |                   |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |                   |                |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.23.6               |                                                 |                   |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220602193528-12108                 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:44 GMT | 02 Jun 22 19:44 GMT |
	|         | newest-cni-20220602193528-12108                            |                                                 |                   |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |                   |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |                   |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220602193528-12108                 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:44 GMT | 02 Jun 22 19:44 GMT |
	|         | newest-cni-20220602193528-12108                            |                                                 |                   |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |                   |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220602193528-12108                 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:44 GMT | 02 Jun 22 19:44 GMT |
	|         | newest-cni-20220602193528-12108                            |                                                 |                   |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |                   |                |                     |                     |
	| start   | -p newest-cni-20220602193528-12108 --memory=2200           | newest-cni-20220602193528-12108                 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:44 GMT | 02 Jun 22 19:45 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |                   |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |                   |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |                   |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |                   |                |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.23.6               |                                                 |                   |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220602193528-12108                 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:46 GMT | 02 Jun 22 19:46 GMT |
	|         | newest-cni-20220602193528-12108                            |                                                 |                   |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |                   |                |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|-------------------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 19:44:41
	Running on machine: minikube7
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 19:44:41.496479   13568 out.go:296] Setting OutFile to fd 704 ...
	I0602 19:44:41.560249   13568 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 19:44:41.560249   13568 out.go:309] Setting ErrFile to fd 1964...
	I0602 19:44:41.560249   13568 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 19:44:41.577216   13568 out.go:303] Setting JSON to false
	I0602 19:44:41.580894   13568 start.go:115] hostinfo: {"hostname":"minikube7","uptime":62223,"bootTime":1654136858,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0602 19:44:41.581464   13568 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 19:44:41.587340   13568 out.go:177] * [newest-cni-20220602193528-12108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0602 19:44:41.589716   13568 notify.go:193] Checking for updates...
	I0602 19:44:41.591861   13568 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0602 19:44:41.594967   13568 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0602 19:44:41.598290   13568 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 19:44:41.601364   13568 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 19:44:42.420346   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:44.424745   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:41.604841   13568 config.go:178] Loaded profile config "newest-cni-20220602193528-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:44:41.605718   13568 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 19:44:44.403715   13568 docker.go:137] docker version: linux-20.10.16
	I0602 19:44:44.417723   13568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:44:46.667733   13568 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2498578s)
	I0602 19:44:46.668460   13568 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:90 OomKillDisable:true NGoroutines:65 SystemTime:2022-06-02 19:44:45.5123415 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:44:46.677206   13568 out.go:177] * Using the docker driver based on existing profile
	I0602 19:44:46.680824   13568 start.go:284] selected driver: docker
	I0602 19:44:46.680824   13568 start.go:806] validating driver "docker" against &{Name:newest-cni-20220602193528-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220602193528-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 19:44:46.680824   13568 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 19:44:46.812980   13568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:44:49.211713   13568 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.3987221s)
	I0602 19:44:49.211890   13568 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:90 OomKillDisable:true NGoroutines:65 SystemTime:2022-06-02 19:44:48.025779 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
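	The driver health check above shells out to the Docker CLI and decodes the daemon's JSON self-description. A minimal sketch of that probe, assuming only that a docker CLI is on PATH; the struct lists a few of the keys visible in the info.go log line:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerInfo picks out a few fields from the daemon's JSON description.
    type dockerInfo struct {
        NCPU            int
        MemTotal        int64
        OperatingSystem string
        ServerVersion   string
    }

    func main() {
        // Same command the test runs: docker system info --format "{{json .}}"
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            panic(err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", info)
    }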
	I0602 19:44:49.212532   13568 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0602 19:44:49.212532   13568 cni.go:95] Creating CNI manager for ""
	I0602 19:44:49.212532   13568 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 19:44:49.212532   13568 start_flags.go:306] config:
	{Name:newest-cni-20220602193528-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220602193528-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 19:44:49.216732   13568 out.go:177] * Starting control plane node newest-cni-20220602193528-12108 in cluster newest-cni-20220602193528-12108
	I0602 19:44:49.219113   13568 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 19:44:49.221725   13568 out.go:177] * Pulling base image ...
	I0602 19:44:46.934589   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:49.415977   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:49.223487   13568 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 19:44:49.223487   13568 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 19:44:49.223747   13568 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 19:44:49.223747   13568 cache.go:57] Caching tarball of preloaded images
	I0602 19:44:49.224300   13568 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 19:44:49.224481   13568 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0602 19:44:49.224898   13568 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\newest-cni-20220602193528-12108\config.json ...
	I0602 19:44:50.473910   13568 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 19:44:50.474138   13568 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 19:44:50.474138   13568 cache.go:206] Successfully downloaded all kic artifacts
	I0602 19:44:50.474138   13568 start.go:352] acquiring machines lock for newest-cni-20220602193528-12108: {Name:mk244be8bfa86d8f96622244132b3a037ccada35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 19:44:50.474138   13568 start.go:356] acquired machines lock for "newest-cni-20220602193528-12108" in 0s
	I0602 19:44:50.474138   13568 start.go:94] Skipping create...Using existing machine configuration
	I0602 19:44:50.474138   13568 fix.go:55] fixHost starting: 
	I0602 19:44:50.489121   13568 cli_runner.go:164] Run: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}
	I0602 19:44:51.833358   13568 cli_runner.go:217] Completed: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}: (1.3442309s)
	I0602 19:44:51.833443   13568 fix.go:103] recreateIfNeeded on newest-cni-20220602193528-12108: state=Stopped err=<nil>
	W0602 19:44:51.833443   13568 fix.go:129] unexpected machine state, will restart: <nil>
	I0602 19:44:51.843959   13568 out.go:177] * Restarting existing docker container for "newest-cni-20220602193528-12108" ...
	I0602 19:44:51.912972   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:53.923388   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:51.872962   13568 cli_runner.go:164] Run: docker start newest-cni-20220602193528-12108
	I0602 19:44:54.200741   13568 cli_runner.go:217] Completed: docker start newest-cni-20220602193528-12108: (2.3277688s)
	I0602 19:44:54.210764   13568 cli_runner.go:164] Run: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}
	I0602 19:44:55.505806   13568 cli_runner.go:217] Completed: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}: (1.2950365s)
	I0602 19:44:55.505806   13568 kic.go:416] container "newest-cni-20220602193528-12108" state is running.
	I0602 19:44:55.515774   13568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220602193528-12108
	I0602 19:44:56.418137   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:58.913097   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:56.822329   13568 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220602193528-12108: (1.3065496s)
	I0602 19:44:56.822329   13568 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\newest-cni-20220602193528-12108\config.json ...
	I0602 19:44:56.824322   13568 machine.go:88] provisioning docker machine ...
	I0602 19:44:56.824322   13568 ubuntu.go:169] provisioning hostname "newest-cni-20220602193528-12108"
	I0602 19:44:56.835344   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:44:58.141724   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.3062118s)
	I0602 19:44:58.148930   13568 main.go:134] libmachine: Using SSH client type: native
	I0602 19:44:58.149171   13568 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 54943 <nil> <nil>}
	I0602 19:44:58.149171   13568 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220602193528-12108 && echo "newest-cni-20220602193528-12108" | sudo tee /etc/hostname
	I0602 19:44:58.389505   13568 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220602193528-12108
	
	I0602 19:44:58.398345   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:44:59.676621   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.278216s)
	I0602 19:44:59.682531   13568 main.go:134] libmachine: Using SSH client type: native
	I0602 19:44:59.683534   13568 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 54943 <nil> <nil>}
	I0602 19:44:59.683534   13568 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220602193528-12108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220602193528-12108/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220602193528-12108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 19:44:59.822921   13568 main.go:134] libmachine: SSH cmd err, output: <nil>: 
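	The hostname script above is idempotent: it rewrites an existing 127.0.1.1 entry in /etc/hosts, or appends one if none is present. The same edit sketched in Go over the file contents as a string (illustrative only, not the provisioner's actual code):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // ensureHostname returns hosts-file content whose 127.0.1.1 entry maps to
    // name, mirroring the sed/tee logic run over SSH above.
    func ensureHostname(hosts, name string) string {
        entry := "127.0.1.1 " + name
        re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if re.MatchString(hosts) {
            return re.ReplaceAllString(hosts, entry) // rewrite the existing entry
        }
        return strings.TrimRight(hosts, "\n") + "\n" + entry + "\n" // append a new one
    }

    func main() {
        fmt.Print(ensureHostname("127.0.0.1 localhost\n", "newest-cni-20220602193528-12108"))
    }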
	I0602 19:44:59.822921   13568 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0602 19:44:59.822921   13568 ubuntu.go:177] setting up certificates
	I0602 19:44:59.823899   13568 provision.go:83] configureAuth start
	I0602 19:44:59.830897   13568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220602193528-12108
	I0602 19:45:01.075848   13568 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220602193528-12108: (1.2449458s)
	I0602 19:45:01.075848   13568 provision.go:138] copyHostCerts
	I0602 19:45:01.075848   13568 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0602 19:45:01.075848   13568 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0602 19:45:01.076851   13568 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0602 19:45:01.078834   13568 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0602 19:45:01.078834   13568 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0602 19:45:01.078834   13568 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0602 19:45:01.079834   13568 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0602 19:45:01.079834   13568 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0602 19:45:01.080821   13568 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1675 bytes)
	I0602 19:45:01.081822   13568 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-20220602193528-12108 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220602193528-12108]
	I0602 19:45:01.452887   13568 provision.go:172] copyRemoteCerts
	I0602 19:45:01.476806   13568 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 19:45:01.485801   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:02.168867   12568 out.go:204]   - Generating certificates and keys ...
	I0602 19:45:02.176856   12568 out.go:204]   - Booting up control plane ...
	I0602 19:45:02.184845   12568 out.go:204]   - Configuring RBAC rules ...
	I0602 19:45:02.189843   12568 cni.go:95] Creating CNI manager for "calico"
	I0602 19:45:02.194836   12568 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0602 19:45:03.360204    7936 out.go:204]   - Generating certificates and keys ...
	I0602 19:45:03.367424    7936 out.go:204]   - Booting up control plane ...
	I0602 19:45:00.926017   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:03.420253   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:05.421419   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:03.373207    7936 out.go:204]   - Configuring RBAC rules ...
	I0602 19:45:03.376660    7936 cni.go:95] Creating CNI manager for ""
	I0602 19:45:03.376660    7936 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 19:45:03.376660    7936 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0602 19:45:03.391258    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae minikube.k8s.io/name=auto-20220602191545-12108 minikube.k8s.io/updated_at=2022_06_02T19_45_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:03.394263    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:03.402292    7936 ops.go:34] apiserver oom_adj: -16
	I0602 19:45:05.567230    7936 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae minikube.k8s.io/name=auto-20220602191545-12108 minikube.k8s.io/updated_at=2022_06_02T19_45_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (2.1758709s)
	I0602 19:45:05.567271    7936 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (2.1729578s)
	I0602 19:45:05.585998    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:02.197833   12568 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0602 19:45:02.197833   12568 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
	I0602 19:45:02.299376   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0602 19:45:02.775165   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.2893581s)
	I0602 19:45:02.775642   13568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54943 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220602193528-12108\id_rsa Username:docker}
	I0602 19:45:02.893906   13568 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.417094s)
	I0602 19:45:02.893906   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0602 19:45:02.972035   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1265 bytes)
	I0602 19:45:03.026753   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0602 19:45:03.090723   13568 provision.go:86] duration metric: configureAuth took 3.2668103s
	I0602 19:45:03.090723   13568 ubuntu.go:193] setting minikube options for container-runtime
	I0602 19:45:03.092719   13568 config.go:178] Loaded profile config "newest-cni-20220602193528-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:45:03.099718   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:04.397842   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.298118s)
	I0602 19:45:04.400841   13568 main.go:134] libmachine: Using SSH client type: native
	I0602 19:45:04.401861   13568 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 54943 <nil> <nil>}
	I0602 19:45:04.401861   13568 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 19:45:04.554686   13568 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 19:45:04.554686   13568 ubuntu.go:71] root file system type: overlay
	I0602 19:45:04.555695   13568 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 19:45:04.570681   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:05.809103   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.2381808s)
	I0602 19:45:05.814611   13568 main.go:134] libmachine: Using SSH client type: native
	I0602 19:45:05.815060   13568 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 54943 <nil> <nil>}
	I0602 19:45:05.815060   13568 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 19:45:06.039515   13568 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
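	The override echoed back above relies on a standard systemd idiom: because the base dockerd unit already sets ExecStart=, the drop-in must first clear it with an empty ExecStart= line, otherwise systemd rejects the unit ("more than one ExecStart= setting"). A minimal sketch, using nothing beyond the Go standard library, of rendering such an override (the function name is illustrative):

    package main

    import "fmt"

    // renderOverride builds a minimal docker.service override. The empty
    // ExecStart= clears the command inherited from the base unit; without it
    // systemd refuses to start a Type=notify service with two ExecStart= lines.
    func renderOverride(serviceCIDR string) string {
        return "[Service]\n" +
            "ExecStart=\n" +
            "ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock --insecure-registry " + serviceCIDR + "\n"
    }

    func main() {
        // 10.96.0.0/12 matches the ServiceCIDR used in the rendered unit above.
        fmt.Print(renderOverride("10.96.0.0/12"))
    }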
	I0602 19:45:06.052500   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:07.914845   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:09.938824   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:06.267653    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:06.766675    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:07.278466    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:07.763872    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:08.272005    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:08.790342    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:09.269470    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:09.769916    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:10.267794    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:10.773803    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:07.859017   12568 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (5.5596171s)
	I0602 19:45:07.859017   12568 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0602 19:45:07.881503   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:07.882383   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae minikube.k8s.io/name=calico-20220602191616-12108 minikube.k8s.io/updated_at=2022_06_02T19_45_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:07.892171   12568 ops.go:34] apiserver oom_adj: -16
	I0602 19:45:08.186764   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:08.920983   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:09.421301   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:09.918889   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:10.421877   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:10.927612   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:11.417579   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:07.318908   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.2663066s)
	I0602 19:45:07.321909   13568 main.go:134] libmachine: Using SSH client type: native
	I0602 19:45:07.322906   13568 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 54943 <nil> <nil>}
	I0602 19:45:07.322906   13568 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 19:45:07.559165   13568 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 19:45:07.559165   13568 machine.go:91] provisioned docker machine in 10.7347973s
	I0602 19:45:07.559165   13568 start.go:306] post-start starting for "newest-cni-20220602193528-12108" (driver="docker")
	I0602 19:45:07.559165   13568 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 19:45:07.578887   13568 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 19:45:07.592670   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:08.848136   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.2554608s)
	I0602 19:45:08.848136   13568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54943 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220602193528-12108\id_rsa Username:docker}
	I0602 19:45:08.995068   13568 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.4161745s)
	I0602 19:45:09.007638   13568 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 19:45:09.025916   13568 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 19:45:09.025916   13568 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 19:45:09.025916   13568 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 19:45:09.025916   13568 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 19:45:09.026467   13568 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0602 19:45:09.026962   13568 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0602 19:45:09.027917   13568 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\121082.pem -> 121082.pem in /etc/ssl/certs
	I0602 19:45:09.038847   13568 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 19:45:09.065810   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\121082.pem --> /etc/ssl/certs/121082.pem (1708 bytes)
	I0602 19:45:09.125134   13568 start.go:309] post-start completed in 1.5659622s
	I0602 19:45:09.139028   13568 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 19:45:09.146768   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:10.318137   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.1713645s)
	I0602 19:45:10.318137   13568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54943 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220602193528-12108\id_rsa Username:docker}
	I0602 19:45:10.468508   13568 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.3293914s)
	I0602 19:45:10.485355   13568 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 19:45:10.503253   13568 fix.go:57] fixHost completed within 20.0290297s
	I0602 19:45:10.503253   13568 start.go:81] releasing machines lock for "newest-cni-20220602193528-12108", held for 20.0290297s
	I0602 19:45:10.510240   13568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220602193528-12108
	I0602 19:45:11.764835   13568 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220602193528-12108: (1.2545895s)
	I0602 19:45:11.766841   13568 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 19:45:11.775838   13568 ssh_runner.go:195] Run: systemctl --version
	I0602 19:45:11.776832   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:11.781837   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:13.110193   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.3333555s)
	I0602 19:45:13.110193   13568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54943 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220602193528-12108\id_rsa Username:docker}
	I0602 19:45:13.126150   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.3443077s)
	I0602 19:45:13.126150   13568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54943 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220602193528-12108\id_rsa Username:docker}
	I0602 19:45:13.338888   13568 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.5720402s)
	I0602 19:45:13.339010   13568 ssh_runner.go:235] Completed: systemctl --version: (1.5631659s)
	I0602 19:45:13.355269   13568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 19:45:13.399856   13568 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 19:45:13.429867   13568 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 19:45:13.446896   13568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 19:45:13.496691   13568 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 19:45:13.554954   13568 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 19:45:13.782934   13568 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 19:45:13.982227   13568 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 19:45:14.025906   13568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 19:45:14.226407   13568 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 19:45:14.279119   13568 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 19:45:14.396377   13568 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 19:45:12.422144   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:14.915305   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:11.272276    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:11.763865    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:12.269347    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:12.765737    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:13.771943    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:14.775567    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:15.356980    7936 kubeadm.go:1045] duration metric: took 11.9798453s to wait for elevateKubeSystemPrivileges.
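	The run of "kubectl get sa default" calls above is a readiness poll: after the minikube-rbac clusterrolebinding is created, startup retries roughly every half second until the default ServiceAccount exists, then logs the total wait (11.98s here). A hedged sketch of that loop; the binary path and timeout are illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA retries `kubectl get sa default` until it succeeds or
    // the timeout elapses, matching the ~0.5s cadence visible in the log.
    func waitForDefaultSA(kubectl string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := exec.Command(kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig").Run(); err == nil {
                return nil // the default service account exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %v", timeout)
    }

    func main() {
        if err := waitForDefaultSA("kubectl", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }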
	I0602 19:45:15.357054    7936 kubeadm.go:397] StartCluster complete in 35.8082943s
	I0602 19:45:15.357125    7936 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:45:15.357125    7936 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0602 19:45:15.359683    7936 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:45:15.982921    7936 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "auto-20220602191545-12108" rescaled to 1
	I0602 19:45:15.982921    7936 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0602 19:45:15.982921    7936 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 19:45:15.986924    7936 out.go:177] * Verifying Kubernetes components...
	I0602 19:45:15.982921    7936 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0602 19:45:15.983881    7936 config.go:178] Loaded profile config "auto-20220602191545-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:45:15.986924    7936 addons.go:65] Setting storage-provisioner=true in profile "auto-20220602191545-12108"
	I0602 19:45:15.987879    7936 addons.go:65] Setting default-storageclass=true in profile "auto-20220602191545-12108"
	I0602 19:45:16.008518    7936 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-20220602191545-12108"
	I0602 19:45:16.008518    7936 addons.go:153] Setting addon storage-provisioner=true in "auto-20220602191545-12108"
	W0602 19:45:16.008518    7936 addons.go:165] addon storage-provisioner should already be in state true
	I0602 19:45:16.009115    7936 host.go:66] Checking if "auto-20220602191545-12108" exists ...
	I0602 19:45:16.027094    7936 ssh_runner.go:195] Run: sudo service kubelet status
	I0602 19:45:16.033100    7936 cli_runner.go:164] Run: docker container inspect auto-20220602191545-12108 --format={{.State.Status}}
	I0602 19:45:16.034090    7936 cli_runner.go:164] Run: docker container inspect auto-20220602191545-12108 --format={{.State.Status}}
	I0602 19:45:11.917089   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:12.416135   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:13.413869   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:13.919325   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:14.426836   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:15.260435   12568 kubeadm.go:1045] duration metric: took 7.4003542s to wait for elevateKubeSystemPrivileges.
	I0602 19:45:15.260435   12568 kubeadm.go:397] StartCluster complete in 35.7891698s
	I0602 19:45:15.260435   12568 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:45:15.261310   12568 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0602 19:45:15.263674   12568 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:45:16.162120   12568 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220602191616-12108" rescaled to 1
	I0602 19:45:16.162120   12568 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 19:45:16.163104   12568 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0602 19:45:16.163104   12568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0602 19:45:16.166127   12568 addons.go:65] Setting storage-provisioner=true in profile "calico-20220602191616-12108"
	I0602 19:45:16.166127   12568 addons.go:65] Setting default-storageclass=true in profile "calico-20220602191616-12108"
	I0602 19:45:16.166127   12568 addons.go:153] Setting addon storage-provisioner=true in "calico-20220602191616-12108"
	W0602 19:45:16.166127   12568 addons.go:165] addon storage-provisioner should already be in state true
	I0602 19:45:16.166127   12568 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220602191616-12108"
	I0602 19:45:16.167093   12568 host.go:66] Checking if "calico-20220602191616-12108" exists ...
	I0602 19:45:16.166127   12568 out.go:177] * Verifying Kubernetes components...
	I0602 19:45:16.163104   12568 config.go:178] Loaded profile config "calico-20220602191616-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:45:16.188102   12568 ssh_runner.go:195] Run: sudo service kubelet status
	I0602 19:45:16.190099   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:45:16.191095   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:45:14.514227   13568 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0602 19:45:14.521221   13568 cli_runner.go:164] Run: docker exec -t newest-cni-20220602193528-12108 dig +short host.docker.internal
	I0602 19:45:16.040121   13568 cli_runner.go:217] Completed: docker exec -t newest-cni-20220602193528-12108 dig +short host.docker.internal: (1.5188935s)
	I0602 19:45:16.040121   13568 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0602 19:45:16.062113   13568 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0602 19:45:16.077109   13568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 19:45:16.121098   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:16.467700    7936 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0602 19:45:16.484695    7936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" auto-20220602191545-12108
	I0602 19:45:17.700097    7936 cli_runner.go:217] Completed: docker container inspect auto-20220602191545-12108 --format={{.State.Status}}: (1.6669892s)
	I0602 19:45:17.703095    7936 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 19:45:17.065196   12568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
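	Both "kubectl ... replace -f -" pipelines above perform the same edit to the coredns ConfigMap: splice a hosts stanza mapping host.minikube.internal to the dig-discovered host IP (192.168.65.2) in front of the forward plugin. The same Corefile edit sketched in plain Go rather than sed (illustrative, not minikube's code):

    package main

    import (
        "fmt"
        "strings"
    )

    // addHostsStanza injects a hosts{} block immediately before the
    // "forward . /etc/resolv.conf" line of a Corefile.
    func addHostsStanza(corefile, hostIP string) string {
        stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
        var b strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                b.WriteString(stanza) // insert just before the forward plugin
            }
            b.WriteString(line)
        }
        return b.String()
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
        fmt.Print(addHostsStanza(corefile, "192.168.65.2"))
    }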
	I0602 19:45:17.077175   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:45:17.855080   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.663978s)
	I0602 19:45:17.855080   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.7339754s)
	I0602 19:45:17.863095   12568 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 19:45:17.866114   13568 out.go:177]   - kubelet.network-plugin=cni
	I0602 19:45:17.872106   13568 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0602 19:45:16.925627   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:18.928961   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:17.706080    7936 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 19:45:17.706080    7936 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0602 19:45:17.722081    7936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220602191545-12108
	I0602 19:45:17.723098    7936 cli_runner.go:217] Completed: docker container inspect auto-20220602191545-12108 --format={{.State.Status}}: (1.689001s)
	I0602 19:45:17.751103    7936 addons.go:153] Setting addon default-storageclass=true in "auto-20220602191545-12108"
	W0602 19:45:17.751103    7936 addons.go:165] addon default-storageclass should already be in state true
	I0602 19:45:17.751103    7936 host.go:66] Checking if "auto-20220602191545-12108" exists ...
	I0602 19:45:17.789090    7936 cli_runner.go:164] Run: docker container inspect auto-20220602191545-12108 --format={{.State.Status}}
	I0602 19:45:18.136359    7936 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" auto-20220602191545-12108: (1.6516568s)
	I0602 19:45:18.146383    7936 node_ready.go:35] waiting up to 5m0s for node "auto-20220602191545-12108" to be "Ready" ...
	I0602 19:45:18.167398    7936 node_ready.go:49] node "auto-20220602191545-12108" has status "Ready":"True"
	I0602 19:45:18.167398    7936 node_ready.go:38] duration metric: took 20.0156ms waiting for node "auto-20220602191545-12108" to be "Ready" ...
	I0602 19:45:18.167398    7936 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 19:45:18.264735    7936 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-dsm5l" in "kube-system" namespace to be "Ready" ...
	I0602 19:45:19.473063    7936 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220602191545-12108: (1.7509742s)
	I0602 19:45:19.473063    7936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54862 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\auto-20220602191545-12108\id_rsa Username:docker}
	I0602 19:45:19.505057    7936 cli_runner.go:217] Completed: docker container inspect auto-20220602191545-12108 --format={{.State.Status}}: (1.7159596s)
	I0602 19:45:19.505057    7936 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0602 19:45:19.505057    7936 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0602 19:45:19.515052    7936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220602191545-12108
	I0602 19:45:20.277466    7936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 19:45:20.448049    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:21.005189    7936 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220602191545-12108: (1.4901307s)
	I0602 19:45:21.005189    7936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54862 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\auto-20220602191545-12108\id_rsa Username:docker}
	I0602 19:45:17.868088   12568 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 19:45:17.868088   12568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0602 19:45:17.878092   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:45:17.883102   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.6929964s)
	I0602 19:45:17.952088   12568 addons.go:153] Setting addon default-storageclass=true in "calico-20220602191616-12108"
	W0602 19:45:17.952088   12568 addons.go:165] addon default-storageclass should already be in state true
	I0602 19:45:17.952088   12568 host.go:66] Checking if "calico-20220602191616-12108" exists ...
	I0602 19:45:17.985092   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:45:18.779249   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.7020666s)
	I0602 19:45:18.782292   12568 node_ready.go:35] waiting up to 5m0s for node "calico-20220602191616-12108" to be "Ready" ...
	I0602 19:45:18.793244   12568 node_ready.go:49] node "calico-20220602191616-12108" has status "Ready":"True"
	I0602 19:45:18.793244   12568 node_ready.go:38] duration metric: took 10.9527ms waiting for node "calico-20220602191616-12108" to be "Ready" ...
	I0602 19:45:18.793244   12568 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 19:45:18.852810   12568 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace to be "Ready" ...
	I0602 19:45:19.538197   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.6600982s)
	I0602 19:45:19.538197   12568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220602191616-12108\id_rsa Username:docker}
	I0602 19:45:19.632410   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.6473107s)
	I0602 19:45:19.632410   12568 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0602 19:45:19.632410   12568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0602 19:45:19.647445   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:45:20.180208   12568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 19:45:21.046208   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:21.101235   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.4537834s)
	I0602 19:45:21.101235   12568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220602191616-12108\id_rsa Username:docker}
	I0602 19:45:17.874108   13568 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 19:45:17.883102   13568 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 19:45:18.002086   13568 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0602 19:45:18.002086   13568 docker.go:541] Images already preloaded, skipping extraction
	I0602 19:45:18.012123   13568 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 19:45:18.119372   13568 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0602 19:45:18.119372   13568 cache_images.go:84] Images are preloaded, skipping loading
	I0602 19:45:18.129365   13568 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 19:45:18.385655   13568 cni.go:95] Creating CNI manager for ""
	I0602 19:45:18.385655   13568 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 19:45:18.385655   13568 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0602 19:45:18.385655   13568 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220602193528-12108 NodeName:newest-cni-20220602193528-12108 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 19:45:18.385655   13568 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "newest-cni-20220602193528-12108"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0602 19:45:18.385655   13568 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220602193528-12108 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220602193528-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0602 19:45:18.395641   13568 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0602 19:45:18.430353   13568 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 19:45:18.456378   13568 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 19:45:18.489659   13568 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (415 bytes)
	I0602 19:45:18.529655   13568 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 19:45:18.591880   13568 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2189 bytes)
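The kubeadm.yaml.new written above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), and a few fields must agree across documents: the podSubnet in ClusterConfiguration and the clusterCIDR handed to kube-proxy are both 192.168.111.111/16 here, and cgroupDriver must match what "docker info --format {{.CgroupDriver}}" reported earlier. A minimal cross-check sketch, assuming PyYAML:

import sys

import yaml  # PyYAML, assumed installed

def load_docs(path):
    # kubeadm.yaml is a multi-document stream separated by "---"
    with open(path) as f:
        return {d.get("kind"): d for d in yaml.safe_load_all(f) if d}

docs = load_docs(sys.argv[1] if len(sys.argv) > 1 else "kubeadm.yaml")
pod_subnet = docs["ClusterConfiguration"]["networking"]["podSubnet"]
cluster_cidr = docs["KubeProxyConfiguration"]["clusterCIDR"]
cgroup_driver = docs["KubeletConfiguration"]["cgroupDriver"]

# kube-proxy's clusterCIDR must match the cluster-wide pod subnet
assert pod_subnet == cluster_cidr, f"{pod_subnet} != {cluster_cidr}"
print(f"podSubnet={pod_subnet} cgroupDriver={cgroup_driver}: OK")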
	I0602 19:45:18.665601   13568 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0602 19:45:18.680608   13568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
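The bash one-liner above makes the /etc/hosts record idempotent: strip any stale line ending in a tab plus the hostname, append a fresh record, and swap the file in via a temp copy so a failure never truncates it. The same logic as a Python sketch (hostname and IP taken from this log):

import os

HOSTS = "/etc/hosts"
NAME = "control-plane.minikube.internal"
IP = "192.168.58.2"

with open(HOSTS) as f:
    # drop any stale record for NAME, mirroring the grep -v step
    kept = [l for l in f if not l.rstrip("\n").endswith("\t" + NAME)]
kept.append(f"{IP}\t{NAME}\n")

# write a temp file, then replace, so a crash never truncates /etc/hosts
tmp = HOSTS + ".new"
with open(tmp, "w") as f:
    f.writelines(kept)
os.replace(tmp, HOSTS)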
	I0602 19:45:18.709613   13568 certs.go:54] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\newest-cni-20220602193528-12108 for IP: 192.168.58.2
	I0602 19:45:18.714605   13568 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0602 19:45:18.715629   13568 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0602 19:45:18.716606   13568 certs.go:298] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\newest-cni-20220602193528-12108\client.key
	I0602 19:45:18.716606   13568 certs.go:298] skipping minikube signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\newest-cni-20220602193528-12108\apiserver.key.cee25041
	I0602 19:45:18.716606   13568 certs.go:298] skipping aggregator signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\newest-cni-20220602193528-12108\proxy-client.key
	I0602 19:45:18.718608   13568 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\12108.pem (1338 bytes)
	W0602 19:45:18.719611   13568 certs.go:384] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\12108_empty.pem, impossibly tiny 0 bytes
	I0602 19:45:18.719611   13568 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0602 19:45:18.719611   13568 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0602 19:45:18.719611   13568 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0602 19:45:18.720612   13568 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0602 19:45:18.720612   13568 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\121082.pem (1708 bytes)
	I0602 19:45:18.722606   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\newest-cni-20220602193528-12108\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 19:45:18.801249   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\newest-cni-20220602193528-12108\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0602 19:45:18.873739   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\newest-cni-20220602193528-12108\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 19:45:18.947214   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\newest-cni-20220602193528-12108\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0602 19:45:19.019021   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 19:45:19.101812   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0602 19:45:19.165962   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 19:45:19.501057   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0602 19:45:19.568074   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\121082.pem --> /usr/share/ca-certificates/121082.pem (1708 bytes)
	I0602 19:45:19.622432   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 19:45:19.690417   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\12108.pem --> /usr/share/ca-certificates/12108.pem (1338 bytes)
	I0602 19:45:19.749142   13568 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 19:45:19.808211   13568 ssh_runner.go:195] Run: openssl version
	I0602 19:45:19.830196   13568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121082.pem && ln -fs /usr/share/ca-certificates/121082.pem /etc/ssl/certs/121082.pem"
	I0602 19:45:19.893481   13568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121082.pem
	I0602 19:45:19.903547   13568 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:28 /usr/share/ca-certificates/121082.pem
	I0602 19:45:19.914475   13568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121082.pem
	I0602 19:45:19.935474   13568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/121082.pem /etc/ssl/certs/3ec20f2e.0"
	I0602 19:45:19.992510   13568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 19:45:20.074759   13568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 19:45:20.085762   13568 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:16 /usr/share/ca-certificates/minikubeCA.pem
	I0602 19:45:20.095755   13568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 19:45:20.132752   13568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0602 19:45:20.189200   13568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12108.pem && ln -fs /usr/share/ca-certificates/12108.pem /etc/ssl/certs/12108.pem"
	I0602 19:45:20.228223   13568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12108.pem
	I0602 19:45:20.247519   13568 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:28 /usr/share/ca-certificates/12108.pem
	I0602 19:45:20.271468   13568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12108.pem
	I0602 19:45:20.295462   13568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12108.pem /etc/ssl/certs/51391683.0"
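The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: each CA under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink (e.g. 51391683.0 for 12108.pem) so certificate verification can find it by hash. A minimal sketch of that step, assuming the openssl binary is on PATH:

import os
import subprocess

def link_ca(pem, certs_dir="/etc/ssl/certs"):
    # "openssl x509 -hash -noout -in <pem>" prints the subject hash, e.g. 51391683
    h = subprocess.run(
        ["openssl", "x509", "-hash", "-noout", "-in", pem],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    link = os.path.join(certs_dir, h + ".0")
    if not os.path.islink(link):  # mirrors: test -L ... || ln -fs ...
        os.symlink(pem, link)
    return link

# example (needs write access to /etc/ssl/certs):
# link_ca("/usr/share/ca-certificates/12108.pem")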
	I0602 19:45:20.320031   13568 kubeadm.go:395] StartCluster: {Name:newest-cni-20220602193528-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220602193528-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 19:45:20.330784   13568 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 19:45:20.436429   13568 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 19:45:20.552771   13568 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0602 19:45:20.552771   13568 kubeadm.go:626] restartCluster start
	I0602 19:45:20.573731   13568 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0602 19:45:20.609703   13568 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:20.623595   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:21.287482    7936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 19:45:22.470550    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:22.553895    7936 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.086169s)
	I0602 19:45:22.554945    7936 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
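The sed pipeline completed above splices a hosts block in front of the "forward . /etc/resolv.conf" plugin in the CoreDNS Corefile, so host.minikube.internal resolves to 192.168.65.2 inside the cluster, and kubectl replace then pushes the edited ConfigMap back. The string edit itself, as a Python sketch:

HOSTS_BLOCK = (
    "        hosts {\n"
    "           192.168.65.2 host.minikube.internal\n"
    "           fallthrough\n"
    "        }\n"
)

def inject(corefile: str) -> str:
    out, done = [], False
    for line in corefile.splitlines(keepends=True):
        # insert once, directly before the forward-to-resolv.conf plugin line
        if not done and line.lstrip().startswith("forward . /etc/resolv.conf"):
            out.append(HOSTS_BLOCK)
            done = True
        out.append(line)
    return "".join(out)

# feed this the Corefile key of the coredns ConfigMap, then kubectl replace it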
	I0602 19:45:23.045096    7936 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.766915s)
	I0602 19:45:23.045096    7936 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.7569034s)
	I0602 19:45:23.049091    7936 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0602 19:45:21.777443   12568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 19:45:23.744608   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:24.858043   12568 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.792813s)
	I0602 19:45:24.858043   12568 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0602 19:45:25.545725   12568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.7672601s)
	I0602 19:45:25.545725   12568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.3654933s)
	I0602 19:45:25.549754   12568 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0602 19:45:21.424687   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:23.908577   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:23.052527    7936 addons.go:417] enableAddons completed in 7.0695754s
	I0602 19:45:24.876019    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:25.553701   12568 addons.go:417] enableAddons completed in 9.3895777s
	I0602 19:45:26.047046   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:22.062468   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.4388661s)
	I0602 19:45:22.064441   13568 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220602193528-12108" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0602 19:45:22.066452   13568 kubeconfig.go:127] "newest-cni-20220602193528-12108" context is missing from C:\Users\jenkins.minikube7\minikube-integration\kubeconfig - will repair!
	I0602 19:45:22.068428   13568 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:45:22.092682   13568 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0602 19:45:22.113602   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:22.126562   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:22.172238   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:22.372845   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:22.387277   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:22.471542   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:22.574891   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:22.584878   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:22.609901   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:22.786637   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:22.798246   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:22.966111   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:22.986704   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:22.998742   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:23.025213   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:23.188077   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:23.198577   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:23.234568   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:23.373670   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:23.387106   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:23.411695   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:23.577397   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:23.588194   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:23.616393   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:23.778604   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:23.789303   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:23.814065   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:23.986333   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:23.997920   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:24.028726   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:24.186179   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:24.198529   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:24.232261   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:24.375290   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:24.393445   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:24.426015   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:24.577736   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:24.587592   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:24.619981   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:24.782977   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:24.791981   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:24.820410   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:24.984347   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:24.995639   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:25.026581   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:25.185586   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:25.197172   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:25.223868   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:25.223868   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:25.232856   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:25.289039   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:25.289039   13568 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0602 19:45:25.289039   13568 kubeadm.go:1092] stopping kube-system containers ...
	I0602 19:45:25.295999   13568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 19:45:25.395481   13568 docker.go:442] Stopping containers: [d2c5fc0230ad 8e1f9558c0ca 1d1446ebcd29 448dbf0a9b78 24eaf3189696 2d2f8a82cd4f db4c6833f821 d3d06040213e 657a9269a4b8 00aa836bb1a9 7e0c5a6877c8 e576c766bf2a 40851dc91e39 f8bc1117b670 9d6ff8386a76 f0f4349e2a50 37eb5623cd99 fcbc8319890a d617f2adfb0d d9cb2e830613 001ff57088e9 7a54dbda0d91 bc5560d151d2 4cfadebe4a13]
	I0602 19:45:25.404451   13568 ssh_runner.go:195] Run: docker stop d2c5fc0230ad 8e1f9558c0ca 1d1446ebcd29 448dbf0a9b78 24eaf3189696 2d2f8a82cd4f db4c6833f821 d3d06040213e 657a9269a4b8 00aa836bb1a9 7e0c5a6877c8 e576c766bf2a 40851dc91e39 f8bc1117b670 9d6ff8386a76 f0f4349e2a50 37eb5623cd99 fcbc8319890a d617f2adfb0d d9cb2e830613 001ff57088e9 7a54dbda0d91 bc5560d151d2 4cfadebe4a13
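Kubelet names its containers k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, which is what the name=k8s_.*_(kube-system)_ filter above keys on: list every kube-system container, then stop them all with a single docker stop before kubeadm reconfigures the node. A minimal sketch, assuming the docker CLI is on PATH:

import subprocess

def kube_system_container_ids():
    # kubelet's naming convention makes the namespace greppable from the name
    out = subprocess.run(
        ["docker", "ps", "-a",
         "--filter", "name=k8s_.*_(kube-system)_",
         "--format", "{{.ID}}"],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.split()

ids = kube_system_container_ids()
if ids:
    subprocess.run(["docker", "stop", *ids], check=True)  # one stop for all IDs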
	I0602 19:45:25.517720   13568 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0602 19:45:25.568707   13568 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 19:45:25.592701   13568 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jun  2 19:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jun  2 19:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jun  2 19:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  2 19:43 /etc/kubernetes/scheduler.conf
	
	I0602 19:45:25.602703   13568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0602 19:45:25.642424   13568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0602 19:45:25.687517   13568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0602 19:45:25.709176   13568 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:25.725650   13568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0602 19:45:25.774027   13568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0602 19:45:25.795775   13568 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:25.808264   13568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0602 19:45:25.862843   13568 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 19:45:25.886823   13568 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0602 19:45:25.886823   13568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 19:45:26.028041   13568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 19:45:26.408188   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:28.410072   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:30.425070   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:27.397407    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:29.872237    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:28.556289   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:30.957286   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:28.165756   13568 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.1377054s)
	I0602 19:45:28.165756   13568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0602 19:45:28.525225   13568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 19:45:28.840566   13568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0602 19:45:29.088918   13568 api_server.go:51] waiting for apiserver process to appear ...
	I0602 19:45:29.101919   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 19:45:29.687355   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 19:45:30.181868   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 19:45:30.683275   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 19:45:31.183167   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 19:45:32.432855   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:34.988897   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:31.882862    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:34.375631    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:32.958874   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:35.451130   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:31.689475   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 19:45:32.182798   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 19:45:32.675827   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 19:45:33.178329   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 19:45:33.679995   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 19:45:33.846188   13568 api_server.go:71] duration metric: took 4.7572492s to wait for apiserver process to appear ...
	I0602 19:45:33.846188   13568 api_server.go:87] waiting for apiserver healthz status ...
	I0602 19:45:33.846188   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:33.852390   13568 api_server.go:256] stopped: https://127.0.0.1:54947/healthz: Get "https://127.0.0.1:54947/healthz": EOF
	I0602 19:45:34.365644   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:37.411055   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:39.421012   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:36.377758    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:38.386042    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:40.891552    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:37.455873   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:39.952037   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:39.374333   13568 api_server.go:256] stopped: https://127.0.0.1:54947/healthz: Get "https://127.0.0.1:54947/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0602 19:45:39.861043   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:40.613416   13568 api_server.go:266] https://127.0.0.1:54947/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0602 19:45:40.613416   13568 api_server.go:102] status: https://127.0.0.1:54947/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0602 19:45:40.861510   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:41.040568   13568 api_server.go:266] https://127.0.0.1:54947/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 19:45:41.040568   13568 api_server.go:102] status: https://127.0.0.1:54947/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 19:45:41.360373   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:41.384512   13568 api_server.go:266] https://127.0.0.1:54947/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 19:45:41.384512   13568 api_server.go:102] status: https://127.0.0.1:54947/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 19:45:41.907546   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:43.909987   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:42.899904    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:45.527740    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:41.953590   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:44.055320   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:41.861124   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:41.949494   13568 api_server.go:266] https://127.0.0.1:54947/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 19:45:41.949494   13568 api_server.go:102] status: https://127.0.0.1:54947/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 19:45:42.364745   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:42.455024   13568 api_server.go:266] https://127.0.0.1:54947/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 19:45:42.456003   13568 api_server.go:102] status: https://127.0.0.1:54947/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 19:45:42.863056   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:42.960010   13568 api_server.go:266] https://127.0.0.1:54947/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 19:45:42.960010   13568 api_server.go:102] status: https://127.0.0.1:54947/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 19:45:43.354853   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:43.465127   13568 api_server.go:266] https://127.0.0.1:54947/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 19:45:43.465127   13568 api_server.go:102] status: https://127.0.0.1:54947/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 19:45:43.863296   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:44.041753   13568 api_server.go:266] https://127.0.0.1:54947/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 19:45:44.041836   13568 api_server.go:102] status: https://127.0.0.1:54947/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 19:45:44.366599   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:44.849840   13568 api_server.go:266] https://127.0.0.1:54947/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 19:45:44.849840   13568 api_server.go:102] status: https://127.0.0.1:54947/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 19:45:44.853919   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:44.880015   13568 api_server.go:266] https://127.0.0.1:54947/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 19:45:44.880015   13568 api_server.go:102] status: https://127.0.0.1:54947/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 19:45:45.359749   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:45.529710   13568 api_server.go:266] https://127.0.0.1:54947/healthz returned 200:
	ok
	I0602 19:45:45.552708   13568 api_server.go:140] control plane version: v1.23.6
	I0602 19:45:45.552708   13568 api_server.go:130] duration metric: took 11.7064701s to wait for apiserver health ...
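For context, the block above is minikube's healthz wait: api_server.go polls https://127.0.0.1:54947/healthz, logging each 500 response (with the per-hook check details) until the endpoint returns 200 "ok". A minimal Go sketch of that kind of loop, with the URL taken from the log and all timings illustrative rather than minikube's actual values:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver's /healthz endpoint and treats
    // anything other than 200 "ok" as not-yet-healthy, mirroring the
    // retry behaviour visible in the log above.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            // The test cluster serves a self-signed cert, so skip
            // verification for this local-only probe (never do this
            // against a real server).
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz returned 200: ok
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond) // illustrative cadence
        }
        return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://127.0.0.1:54947/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }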
	I0602 19:45:45.552708   13568 cni.go:95] Creating CNI manager for ""
	I0602 19:45:45.552708   13568 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 19:45:45.552708   13568 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 19:45:45.755019   13568 system_pods.go:59] 8 kube-system pods found
	I0602 19:45:45.755019   13568 system_pods.go:61] "coredns-64897985d-nvh82" [e020a13f-06c3-4682-8596-3644e6368c0d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0602 19:45:45.755019   13568 system_pods.go:61] "etcd-newest-cni-20220602193528-12108" [9ef3d8bd-c960-4a4c-94cf-c13c0e665943] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0602 19:45:45.755019   13568 system_pods.go:61] "kube-apiserver-newest-cni-20220602193528-12108" [bde9ca58-c780-44d8-95d5-ae32ca2ec9e7] Running
	I0602 19:45:45.755019   13568 system_pods.go:61] "kube-controller-manager-newest-cni-20220602193528-12108" [4a3f5b11-4274-48f1-adba-16d5ee24cef6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0602 19:45:45.755019   13568 system_pods.go:61] "kube-proxy-6qlxd" [83790132-5a2f-4b5b-9e93-dea1fd63879f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0602 19:45:45.755019   13568 system_pods.go:61] "kube-scheduler-newest-cni-20220602193528-12108" [8d248962-47a0-44ab-b62e-d7215d2438b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0602 19:45:45.755019   13568 system_pods.go:61] "metrics-server-b955d9d8-4zjkc" [f7310338-75db-4112-9f21-d33fba8787e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 19:45:45.755019   13568 system_pods.go:61] "storage-provisioner" [05e51f8c-9b94-44b1-867a-06909461c1d3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 19:45:45.755019   13568 system_pods.go:74] duration metric: took 202.3101ms to wait for pod list to return data ...
	I0602 19:45:45.755019   13568 node_conditions.go:102] verifying NodePressure condition ...
	I0602 19:45:45.863551   13568 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0602 19:45:45.863706   13568 node_conditions.go:123] node cpu capacity is 16
	I0602 19:45:45.863706   13568 node_conditions.go:105] duration metric: took 108.6867ms to run NodePressure ...
	I0602 19:45:45.863706   13568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 19:45:47.658309   13568 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.7944465s)
	I0602 19:45:47.658309   13568 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0602 19:45:47.692004   13568 ops.go:34] apiserver oom_adj: -16
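The oom_adj read above checks how exposed the apiserver is to the kernel's OOM killer: /proc/<pid>/oom_adj is the legacy adjustment knob (range -17 to 15, lower meaning less likely to be killed, -17 disabling killing entirely), so the logged -16 marks kube-apiserver as strongly protected. A hedged Go sketch of the same check; the pgrep flags and path handling are illustrative, not minikube's code:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Find the kube-apiserver PID, as the log does with pgrep.
        out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
        if err != nil {
            fmt.Println("pgrep:", err)
            return
        }
        pid := strings.TrimSpace(string(out))

        // Read the legacy OOM-killer adjustment; -16 means the kernel
        // should strongly avoid killing this process under memory pressure.
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            fmt.Println("read:", err)
            return
        }
        fmt.Printf("apiserver oom_adj: %s", adj)
    }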
	I0602 19:45:47.692004   13568 kubeadm.go:630] restartCluster took 27.1391171s
	I0602 19:45:47.692081   13568 kubeadm.go:397] StartCluster complete in 27.3719325s
	I0602 19:45:47.692127   13568 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:45:47.692404   13568 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0602 19:45:47.699337   13568 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:45:47.761081   13568 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220602193528-12108" rescaled to 1
	I0602 19:45:47.761081   13568 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 19:45:47.761081   13568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0602 19:45:47.766080   13568 out.go:177] * Verifying Kubernetes components...
	I0602 19:45:47.761081   13568 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0602 19:45:47.762080   13568 config.go:178] Loaded profile config "newest-cni-20220602193528-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:45:47.767084   13568 addons.go:65] Setting dashboard=true in profile "newest-cni-20220602193528-12108"
	I0602 19:45:47.767084   13568 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220602193528-12108"
	I0602 19:45:47.767084   13568 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220602193528-12108"
	I0602 19:45:47.771085   13568 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220602193528-12108"
	I0602 19:45:47.767084   13568 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220602193528-12108"
	I0602 19:45:47.771085   13568 addons.go:153] Setting addon dashboard=true in "newest-cni-20220602193528-12108"
	W0602 19:45:47.771085   13568 addons.go:165] addon dashboard should already be in state true
	I0602 19:45:47.771085   13568 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220602193528-12108"
	W0602 19:45:47.771085   13568 addons.go:165] addon storage-provisioner should already be in state true
	I0602 19:45:47.771085   13568 host.go:66] Checking if "newest-cni-20220602193528-12108" exists ...
	I0602 19:45:47.771085   13568 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220602193528-12108"
	W0602 19:45:47.771085   13568 addons.go:165] addon metrics-server should already be in state true
	I0602 19:45:47.771085   13568 host.go:66] Checking if "newest-cni-20220602193528-12108" exists ...
	I0602 19:45:47.772082   13568 host.go:66] Checking if "newest-cni-20220602193528-12108" exists ...
	I0602 19:45:47.787060   13568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 19:45:47.792060   13568 cli_runner.go:164] Run: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}
	I0602 19:45:47.793109   13568 cli_runner.go:164] Run: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}
	I0602 19:45:47.794075   13568 cli_runner.go:164] Run: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}
	I0602 19:45:47.795068   13568 cli_runner.go:164] Run: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}
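The four cli_runner invocations above all use one pattern: docker container inspect with a Go template that pulls a single field out of the container's JSON. An illustrative Go equivalent (the container name comes from the log; driving the CLI via os/exec here is a sketch, not minikube's cli_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // --format applies a Go template to the inspect output, so only
        // the container's state string (e.g. "running") comes back.
        out, err := exec.Command("docker", "container", "inspect",
            "newest-cni-20220602193528-12108",
            "--format", "{{.State.Status}}").Output()
        if err != nil {
            fmt.Println("inspect:", err)
            return
        }
        fmt.Println("state:", strings.TrimSpace(string(out)))
    }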
	I0602 19:45:48.251664   13568 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0602 19:45:48.270554   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:49.486707   13568 cli_runner.go:217] Completed: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}: (1.6926254s)
	I0602 19:45:49.501754   13568 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0602 19:45:49.510708   13568 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0602 19:45:49.514711   13568 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0602 19:45:49.514711   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0602 19:45:49.517721   13568 cli_runner.go:217] Completed: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}: (1.7246046s)
	I0602 19:45:49.517721   13568 cli_runner.go:217] Completed: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}: (1.7226457s)
	I0602 19:45:49.521743   13568 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 19:45:49.525713   13568 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 19:45:49.525713   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0602 19:45:49.525713   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:49.533710   13568 cli_runner.go:217] Completed: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}: (1.7416425s)
	I0602 19:45:49.533710   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:49.536735   13568 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0602 19:45:46.456242   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:48.966120   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:47.885090    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:49.888764    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:46.460244   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:48.464960   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:50.957962   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:49.546118   13568 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0602 19:45:49.546118   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0602 19:45:49.564755   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:49.567734   13568 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220602193528-12108"
	W0602 19:45:49.567734   13568 addons.go:165] addon default-storageclass should already be in state true
	I0602 19:45:49.567734   13568 host.go:66] Checking if "newest-cni-20220602193528-12108" exists ...
	I0602 19:45:49.601733   13568 cli_runner.go:164] Run: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}
	I0602 19:45:49.990230   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.7196688s)
	I0602 19:45:49.990230   13568 api_server.go:51] waiting for apiserver process to appear ...
	I0602 19:45:50.010206   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 19:45:50.073210   13568 api_server.go:71] duration metric: took 2.3121189s to wait for apiserver process to appear ...
	I0602 19:45:50.073210   13568 api_server.go:87] waiting for apiserver healthz status ...
	I0602 19:45:50.073210   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:50.094211   13568 api_server.go:266] https://127.0.0.1:54947/healthz returned 200:
	ok
	I0602 19:45:50.099215   13568 api_server.go:140] control plane version: v1.23.6
	I0602 19:45:50.099215   13568 api_server.go:130] duration metric: took 26.0041ms to wait for apiserver health ...
	I0602 19:45:50.099215   13568 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 19:45:50.155229   13568 system_pods.go:59] 8 kube-system pods found
	I0602 19:45:50.155229   13568 system_pods.go:61] "coredns-64897985d-nvh82" [e020a13f-06c3-4682-8596-3644e6368c0d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0602 19:45:50.155229   13568 system_pods.go:61] "etcd-newest-cni-20220602193528-12108" [9ef3d8bd-c960-4a4c-94cf-c13c0e665943] Running
	I0602 19:45:50.155229   13568 system_pods.go:61] "kube-apiserver-newest-cni-20220602193528-12108" [bde9ca58-c780-44d8-95d5-ae32ca2ec9e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0602 19:45:50.155229   13568 system_pods.go:61] "kube-controller-manager-newest-cni-20220602193528-12108" [4a3f5b11-4274-48f1-adba-16d5ee24cef6] Running
	I0602 19:45:50.155229   13568 system_pods.go:61] "kube-proxy-6qlxd" [83790132-5a2f-4b5b-9e93-dea1fd63879f] Running
	I0602 19:45:50.155229   13568 system_pods.go:61] "kube-scheduler-newest-cni-20220602193528-12108" [8d248962-47a0-44ab-b62e-d7215d2438b0] Running
	I0602 19:45:50.155229   13568 system_pods.go:61] "metrics-server-b955d9d8-4zjkc" [f7310338-75db-4112-9f21-d33fba8787e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 19:45:50.155229   13568 system_pods.go:61] "storage-provisioner" [05e51f8c-9b94-44b1-867a-06909461c1d3] Running
	I0602 19:45:50.155229   13568 system_pods.go:74] duration metric: took 56.0143ms to wait for pod list to return data ...
	I0602 19:45:50.155229   13568 default_sa.go:34] waiting for default service account to be created ...
	I0602 19:45:50.166988   13568 default_sa.go:45] found service account: "default"
	I0602 19:45:50.167149   13568 default_sa.go:55] duration metric: took 11.8514ms for default service account to be created ...
	I0602 19:45:50.167149   13568 kubeadm.go:572] duration metric: took 2.406057s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0602 19:45:50.167149   13568 node_conditions.go:102] verifying NodePressure condition ...
	I0602 19:45:50.184766   13568 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0602 19:45:50.184766   13568 node_conditions.go:123] node cpu capacity is 16
	I0602 19:45:50.184766   13568 node_conditions.go:105] duration metric: took 17.617ms to run NodePressure ...
	I0602 19:45:50.184766   13568 start.go:213] waiting for startup goroutines ...
	I0602 19:45:51.290384   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.7566664s)
	I0602 19:45:51.290384   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.7646634s)
	I0602 19:45:51.290384   13568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54943 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220602193528-12108\id_rsa Username:docker}
	I0602 19:45:51.290384   13568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54943 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220602193528-12108\id_rsa Username:docker}
	I0602 19:45:51.339386   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.7746237s)
	I0602 19:45:51.340760   13568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54943 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220602193528-12108\id_rsa Username:docker}
	I0602 19:45:51.368695   13568 cli_runner.go:217] Completed: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}: (1.7669547s)
	I0602 19:45:51.369047   13568 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0602 19:45:51.369087   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0602 19:45:51.386688   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:51.423686   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:53.429991   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:52.381387    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:54.387104    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:53.456004   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:55.961545   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:51.715375   13568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 19:45:51.742885   13568 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0602 19:45:51.742965   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0602 19:45:51.860842   13568 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0602 19:45:51.860842   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0602 19:45:51.864810   13568 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0602 19:45:51.864810   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0602 19:45:51.978825   13568 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0602 19:45:51.978825   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0602 19:45:52.041211   13568 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0602 19:45:52.041211   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0602 19:45:52.080073   13568 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0602 19:45:52.080073   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0602 19:45:52.158643   13568 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0602 19:45:52.158643   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0602 19:45:52.258640   13568 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0602 19:45:52.258640   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0602 19:45:52.371390   13568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0602 19:45:52.461418   13568 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0602 19:45:52.461418   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0602 19:45:52.656675   13568 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0602 19:45:52.656675   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0602 19:45:52.779132   13568 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0602 19:45:52.779132   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0602 19:45:52.892716   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.5060214s)
	I0602 19:45:52.892716   13568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54943 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220602193528-12108\id_rsa Username:docker}
	I0602 19:45:52.947185   13568 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0602 19:45:52.947333   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0602 19:45:53.260646   13568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0602 19:45:53.580387   13568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 19:45:55.767378   13568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.0517429s)
	I0602 19:45:55.880534   13568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.5091289s)
	I0602 19:45:55.880534   13568 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220602193528-12108"
	I0602 19:45:56.658178   13568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.3975176s)
	I0602 19:45:56.659168   13568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.0777781s)
	I0602 19:45:56.663269   13568 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0602 19:45:56.667539   13568 addons.go:417] enableAddons completed in 8.9064194s
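The addon flow logged above has two halves: each manifest is scp'd into /etc/kubernetes/addons/ inside the node, then one kubectl apply per addon is run with a -f flag per file, using the in-VM kubeconfig. A sketch of assembling that apply command in Go; the paths are the ones in the log, the command construction itself is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // One -f per manifest, mirroring the dashboard apply in the log
        // (list truncated here; the log applies ten dashboard files).
        files := []string{
            "/etc/kubernetes/addons/dashboard-ns.yaml",
            "/etc/kubernetes/addons/dashboard-dp.yaml",
            "/etc/kubernetes/addons/dashboard-svc.yaml",
        }
        args := []string{
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.23.6/kubectl", "apply",
        }
        for _, f := range files {
            args = append(args, "-f", f)
        }
        // `sudo VAR=value cmd ...` sets the variable for the command it runs.
        out, err := exec.Command("sudo", args...).CombinedOutput()
        fmt.Println(string(out), err)
    }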
	I0602 19:45:56.893715   13568 start.go:504] kubectl: 1.18.2, cluster: 1.23.6 (minor skew: 5)
	I0602 19:45:56.895730   13568 out.go:177] 
	W0602 19:45:56.898471   13568 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.23.6.
	I0602 19:45:56.902106   13568 out.go:177]   - Want kubectl v1.23.6? Try 'minikube kubectl -- get pods -A'
	I0602 19:45:56.906953   13568 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220602193528-12108" cluster and "default" namespace by default
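The closing warning compares the host kubectl's minor version against the cluster's: 1.18 versus 1.23 gives the "minor skew: 5" in the log, well outside kubectl's supported skew of one minor version. A simplified sketch of that comparison (the version parsing is deliberately minimal and illustrative):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minor extracts the minor number from a "major.minor.patch" string.
    func minor(v string) int {
        parts := strings.Split(v, ".")
        m, _ := strconv.Atoi(parts[1])
        return m
    }

    func main() {
        client, cluster := "1.18.2", "1.23.6" // values from the log
        skew := minor(cluster) - minor(client)
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("minor skew: %d\n", skew) // prints 5
        if skew > 1 {
            fmt.Println("! kubectl may have incompatibilities with the cluster")
        }
    }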
	I0602 19:45:55.937158   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:58.412172   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:00.446385   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:56.391721    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:58.878490    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:00.891997    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:58.542909   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:00.912996   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:02.920944   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:04.925373   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:03.389662    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:05.898031    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:02.956957   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:05.048708   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:07.418700   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:09.917140   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:08.389076    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:10.877594    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:07.462698   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:09.954846   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:11.927311   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:14.415947   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:13.384426    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:15.884637    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:12.462765   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:14.969890   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:16.927070   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:19.412475   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:18.381763    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:20.386682    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:17.463199   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:19.959815   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:21.421850   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:23.423626   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:22.882125    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:24.882194    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:21.959877   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:24.458199   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:25.931595   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:28.423791   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:27.380583    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:29.871026    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:26.935194   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:28.963416   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-06-02 19:44:54 UTC, end at Thu 2022-06-02 19:46:42 UTC. --
	Jun 02 19:46:13 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:13.583741400Z" level=info msg="ignoring event" container=ddee99dde179ce339c62ec0b247c9ab7cc9b70ea1ea85e3552928b63b48c0f2d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:14 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:14.540213800Z" level=info msg="ignoring event" container=11e411025822e7cd472debbc2f78098c4cb2a77f9c3adea640f602e3b8b565b4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:15 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:15.985167900Z" level=info msg="ignoring event" container=7aaa157101aec42e913a7860c0d55a9dee0ec3398da52ea346081d393a14cc0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:16 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:16.187539000Z" level=info msg="ignoring event" container=cf8a565534698a32d94131ba77e80b2d0551fe4272423844ec9b3c6fe921d039 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:16 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:16.383357900Z" level=info msg="ignoring event" container=e2eed53500998fefdfd7f1fec75a06b70b05964b4df4f437585500201084d312 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:17 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:17.631649800Z" level=info msg="ignoring event" container=062b1872a8d78e6bedab43b82ac80b28ad01ca60cd390b8645e42de30e313c03 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:19 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:19.538697100Z" level=info msg="ignoring event" container=66cd74b2c3ead279455e5cb8b76e98880b7c6ebcffd6d2751d860bcdac9358fc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:19 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:19.662498600Z" level=info msg="ignoring event" container=e220a7ae49f7c4041ef22632e322968fe95735e45793be3f6909441973c98970 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:21 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:21.754598600Z" level=info msg="ignoring event" container=b70b862b0ce8a05651b45849f3822df47df672f2493a7679f420c6e7ff2a0b22 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:22 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:22.051137600Z" level=info msg="ignoring event" container=7c23ac3f27b77038c9c09a10207ce17b1b84335a9dda21c5991d4589da56838c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:24 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:24.011797800Z" level=info msg="ignoring event" container=78221bebf0e55843644aff3e31eccdbef81a1004f99fb87e546cc771a054864e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:24 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:24.174241700Z" level=info msg="ignoring event" container=7958d7b49442cbe06ee58459047f6fd98adf0dd4cf7dc48bbe8da238aa6b5955 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:25 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:25.469331900Z" level=info msg="ignoring event" container=b853edb5215d1f47a0741424f59a3be6d0af7bdaa7650ede5beb0424e70e510d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:28 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:28.041451400Z" level=info msg="ignoring event" container=e81351c59c85939928c2162ebf7a1e8b3b7ee2fea6f234a6d9bf1ae7d690d4b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:28 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:28.150956200Z" level=info msg="ignoring event" container=7c48e607a61d1c5d2b367461dd7dc795c9624504c824df3279cc32ef8416ed59 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:28 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:28.309737100Z" level=info msg="ignoring event" container=11052fef95f11d752ae47646ef19c08052bbb15eace4ae3dd6dd112d537ab1b9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:30 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:30.279064800Z" level=info msg="ignoring event" container=e19c4ffd54a63cef26163274bdf5a436a6ffd4393ba07fcb9d8ae85db5e21543 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:33 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:33.542014400Z" level=info msg="ignoring event" container=a7afff2dfc7067898a0c0e197b3c46b753952c83274b40c7af91e85d9869554b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:33 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:33.749768200Z" level=info msg="ignoring event" container=afc06d036210681efc66cceaed7a08b7f2a30561e673bc0ce420454fb66f80ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:33 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:33.754953300Z" level=info msg="ignoring event" container=08e6df97dd740f4a4b334c33cf6fd52d817984ae6ea7ec818a623ee1d01dddb8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:36 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:36.088411400Z" level=info msg="ignoring event" container=fa4a3cede53bac52b35b355e446a29d6ffa2d1e0f773ed466b6f91ddc099550e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:37 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:37.054106500Z" level=info msg="ignoring event" container=d216310b2d2bf444458e635579390a836e59d818aaae8a5915a50cbebf89e580 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:38 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:38.255433000Z" level=info msg="ignoring event" container=814aa72ff69b2567779518709f0b22e89c5e45df25b8dec06f83b2fd4fcf7f76 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:39 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:39.264675700Z" level=info msg="ignoring event" container=dcb91e03fd5e0c111be2ca50b8cf6960cd911c7393f132742fdd4353808eef00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:39 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:39.976673500Z" level=info msg="ignoring event" container=819d2cc0a4fcdb73c8f363e0a0c4f5fac4ebb8a994ed820490dba3a24cdbeecf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	f3713257900fb       6e38f40d628db       15 seconds ago       Running             storage-provisioner       2                   23b54706dc012
	1705a550b9b6c       4c03754524064       58 seconds ago       Running             kube-proxy                1                   731b52d4bbe49
	11e411025822e       6e38f40d628db       About a minute ago   Exited              storage-provisioner       1                   23b54706dc012
	93d0a0699aa39       25f8c7f3da61c       About a minute ago   Running             etcd                      1                   9bfbfb42dc1e8
	2e5f7364c8bed       595f327f224a4       About a minute ago   Running             kube-scheduler            1                   9fa60b2829a78
	2bdb3f26ce03b       8fa62c12256df       About a minute ago   Running             kube-apiserver            1                   1d4b824c3839a
	a13b2fac238d7       df7b72818ad2e       About a minute ago   Running             kube-controller-manager   2                   ce12276d46737
	e576c766bf2ab       4c03754524064       2 minutes ago        Exited              kube-proxy                0                   9d6ff8386a767
	f0f4349e2a508       df7b72818ad2e       3 minutes ago        Exited              kube-controller-manager   1                   bc5560d151d25
	37eb5623cd991       25f8c7f3da61c       3 minutes ago        Exited              etcd                      0                   4cfadebe4a138
	fcbc8319890a5       8fa62c12256df       3 minutes ago        Exited              kube-apiserver            0                   001ff57088e9c
	d9cb2e8306132       595f327f224a4       3 minutes ago        Exited              kube-scheduler            0                   7a54dbda0d91f
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20220602193528-12108
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20220602193528-12108
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae
	                    minikube.k8s.io/name=newest-cni-20220602193528-12108
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_02T19_43_43_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Jun 2022 19:43:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20220602193528-12108
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Jun 2022 19:46:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Jun 2022 19:45:41 +0000   Thu, 02 Jun 2022 19:43:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Jun 2022 19:45:41 +0000   Thu, 02 Jun 2022 19:43:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Jun 2022 19:45:41 +0000   Thu, 02 Jun 2022 19:43:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Jun 2022 19:45:41 +0000   Thu, 02 Jun 2022 19:43:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    newest-cni-20220602193528-12108
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	System Info:
	  Machine ID:                 a34bb2508bce429bb90502b0ef044420
	  System UUID:                a34bb2508bce429bb90502b0ef044420
	  Boot ID:                    174c87a1-4ba0-4f3f-a840-04757270163f
	  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-nvh82                                    100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m43s
	  kube-system                 etcd-newest-cni-20220602193528-12108                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         3m22s
	  kube-system                 kube-apiserver-newest-cni-20220602193528-12108             250m (1%)     0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 kube-controller-manager-newest-cni-20220602193528-12108    200m (1%)     0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kube-proxy-6qlxd                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 kube-scheduler-newest-cni-20220602193528-12108             100m (0%)     0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 metrics-server-b955d9d8-4zjkc                              100m (0%)     0 (0%)      200Mi (0%)       0 (0%)         2m29s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-9zf2w                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-cd7c84bfc-xsbcz                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (5%)   0 (0%)
	  memory             370Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  Starting                 2m37s                  kube-proxy  
	  Normal  NodeHasNoDiskPressure    3m39s (x8 over 3m40s)  kubelet     Node newest-cni-20220602193528-12108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m39s (x7 over 3m40s)  kubelet     Node newest-cni-20220602193528-12108 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  3m39s (x8 over 3m40s)  kubelet     Node newest-cni-20220602193528-12108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m59s                  kubelet     Node newest-cni-20220602193528-12108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m59s                  kubelet     Node newest-cni-20220602193528-12108 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m59s                  kubelet     Node newest-cni-20220602193528-12108 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m59s                  kubelet     Starting kubelet.
	  Normal  NodeNotReady             2m58s                  kubelet     Node newest-cni-20220602193528-12108 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m57s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m48s                  kubelet     Node newest-cni-20220602193528-12108 status is now: NodeReady
	  Normal  Starting                 74s                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  73s (x8 over 74s)      kubelet     Node newest-cni-20220602193528-12108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    73s (x8 over 74s)      kubelet     Node newest-cni-20220602193528-12108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     73s (x7 over 74s)      kubelet     Node newest-cni-20220602193528-12108 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  73s                    kubelet     Updated Node Allocatable limit across pods
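The node description above (labels, conditions, capacity, non-terminated pods, events) is standard `kubectl describe node` output, which minikube's log collector captures for the report. A test helper could gather it like this (illustrative; the context and node name are taken from the log, where both equal the profile name):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        name := "newest-cni-20220602193528-12108" // profile == node name here
        out, err := exec.Command("kubectl", "--context", name,
            "describe", "node", name).CombinedOutput()
        if err != nil {
            fmt.Println("describe:", err)
        }
        fmt.Println(string(out))
    }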
	
	* 
	* ==> dmesg <==
	* [Jun 2 19:17] WSL2: Performing memory compaction.
	[Jun 2 19:20] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000007] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000524] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.001857] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.086702] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000007] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.013102] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.007275] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Jun 2 19:22] WSL2: Performing memory compaction.
	[Jun 2 19:23] WSL2: Performing memory compaction.
	[Jun 2 19:25] WSL2: Performing memory compaction.
	[Jun 2 19:35] WSL2: Performing memory compaction.
	[Jun 2 19:36] WSL2: Performing memory compaction.
	[Jun 2 19:37] WSL2: Performing memory compaction.
	[Jun 2 19:38] WSL2: Performing memory compaction.
	[Jun 2 19:41] WSL2: Performing memory compaction.
	[Jun 2 19:42] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [37eb5623cd99] <==
	* {"level":"info","ts":"2022-06-02T19:44:00.445Z","caller":"traceutil/trace.go:171","msg":"trace[1358101858] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"101.6597ms","start":"2022-06-02T19:44:00.344Z","end":"2022-06-02T19:44:00.445Z","steps":["trace[1358101858] 'process raft request'  (duration: 101.2861ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-02T19:44:00.446Z","caller":"traceutil/trace.go:171","msg":"trace[2062613631] transaction","detail":"{read_only:false; response_revision:431; number_of_response:1; }","duration":"103.3103ms","start":"2022-06-02T19:44:00.343Z","end":"2022-06-02T19:44:00.446Z","steps":["trace[2062613631] 'process raft request'  (duration: 101.6054ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-02T19:44:00.604Z","caller":"traceutil/trace.go:171","msg":"trace[1432950124] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"143.1198ms","start":"2022-06-02T19:44:00.461Z","end":"2022-06-02T19:44:00.604Z","steps":["trace[1432950124] 'process raft request'  (duration: 130.8677ms)","trace[1432950124] 'compare'  (duration: 11.6224ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-02T19:44:00.839Z","caller":"traceutil/trace.go:171","msg":"trace[956500219] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"101.6569ms","start":"2022-06-02T19:44:00.737Z","end":"2022-06-02T19:44:00.839Z","steps":["trace[956500219] 'compare'  (duration: 99.5461ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T19:44:00.968Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"110.6166ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-6qlxd\" ","response":"range_response_count:1 size:4448"}
	{"level":"info","ts":"2022-06-02T19:44:00.968Z","caller":"traceutil/trace.go:171","msg":"trace[1357411940] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-6qlxd; range_end:; response_count:1; response_revision:454; }","duration":"110.9896ms","start":"2022-06-02T19:44:00.857Z","end":"2022-06-02T19:44:00.968Z","steps":["trace[1357411940] 'agreement among raft nodes before linearized reading'  (duration: 89.0991ms)","trace[1357411940] 'range keys from in-memory index tree'  (duration: 21.4928ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-02T19:44:00.968Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"108.1911ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-64897985d-2sl4x\" ","response":"range_response_count:1 size:4337"}
	{"level":"info","ts":"2022-06-02T19:44:00.968Z","caller":"traceutil/trace.go:171","msg":"trace[1004335176] range","detail":"{range_begin:/registry/pods/kube-system/coredns-64897985d-2sl4x; range_end:; response_count:1; response_revision:454; }","duration":"108.9072ms","start":"2022-06-02T19:44:00.860Z","end":"2022-06-02T19:44:00.968Z","steps":["trace[1004335176] 'agreement among raft nodes before linearized reading'  (duration: 86.6981ms)","trace[1004335176] 'range keys from in-memory index tree'  (duration: 21.5285ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-02T19:44:00.968Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"107.8543ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-64897985d-nvh82\" ","response":"range_response_count:1 size:3461"}
	{"level":"info","ts":"2022-06-02T19:44:00.969Z","caller":"traceutil/trace.go:171","msg":"trace[331850363] range","detail":"{range_begin:/registry/pods/kube-system/coredns-64897985d-nvh82; range_end:; response_count:1; response_revision:454; }","duration":"108.4961ms","start":"2022-06-02T19:44:00.860Z","end":"2022-06-02T19:44:00.968Z","steps":["trace[331850363] 'agreement among raft nodes before linearized reading'  (duration: 86.1808ms)","trace[331850363] 'range keys from in-memory index tree'  (duration: 21.6444ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-02T19:44:08.052Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"103.7827ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-system/system:persistent-volume-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-02T19:44:08.052Z","caller":"traceutil/trace.go:171","msg":"trace[994262197] range","detail":"{range_begin:/registry/roles/kube-system/system:persistent-volume-provisioner; range_end:; response_count:0; response_revision:476; }","duration":"104.0265ms","start":"2022-06-02T19:44:07.948Z","end":"2022-06-02T19:44:08.052Z","steps":["trace[994262197] 'agreement among raft nodes before linearized reading'  (duration: 87.5831ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T19:44:08.052Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.8469ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:4518"}
	{"level":"info","ts":"2022-06-02T19:44:08.052Z","caller":"traceutil/trace.go:171","msg":"trace[614362432] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:476; }","duration":"103.3016ms","start":"2022-06-02T19:44:07.949Z","end":"2022-06-02T19:44:08.052Z","steps":["trace[614362432] 'agreement among raft nodes before linearized reading'  (duration: 86.5535ms)","trace[614362432] 'range keys from in-memory index tree'  (duration: 16.2372ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-02T19:44:14.581Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"118.6865ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:1 size:4883"}
	{"level":"info","ts":"2022-06-02T19:44:14.581Z","caller":"traceutil/trace.go:171","msg":"trace[331367817] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:1; response_revision:515; }","duration":"119.0079ms","start":"2022-06-02T19:44:14.462Z","end":"2022-06-02T19:44:14.581Z","steps":["trace[331367817] 'agreement among raft nodes before linearized reading'  (duration: 108.5714ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-02T19:44:22.347Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-06-02T19:44:22.348Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"newest-cni-20220602193528-12108","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	WARNING: 2022/06/02 19:44:22 [core] grpc: addrConn.createTransport failed to connect to {192.168.58.2:2379 192.168.58.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.58.2:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/02 19:44:22 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/02 19:44:22 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2022-06-02T19:44:22.535Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b2c6679ac05f2cf1","current-leader-member-id":"b2c6679ac05f2cf1"}
	{"level":"info","ts":"2022-06-02T19:44:22.642Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-02T19:44:22.644Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-02T19:44:22.644Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"newest-cni-20220602193528-12108","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	
	* 
	* ==> etcd [93d0a0699aa3] <==
	* {"level":"warn","ts":"2022-06-02T19:45:45.526Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"144.3113ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-02T19:45:45.527Z","caller":"traceutil/trace.go:171","msg":"trace[2000617987] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:579; }","duration":"144.404ms","start":"2022-06-02T19:45:45.382Z","end":"2022-06-02T19:45:45.527Z","steps":["trace[2000617987] 'agreement among raft nodes before linearized reading'  (duration: 142.9119ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T19:45:45.660Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"117.7292ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.58.2\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-06-02T19:45:45.660Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.3676ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:8 size:41515"}
	{"level":"info","ts":"2022-06-02T19:45:45.660Z","caller":"traceutil/trace.go:171","msg":"trace[147537117] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:8; response_revision:580; }","duration":"102.5288ms","start":"2022-06-02T19:45:45.558Z","end":"2022-06-02T19:45:45.660Z","steps":["trace[147537117] 'agreement among raft nodes before linearized reading'  (duration: 78.9483ms)","trace[147537117] 'range keys from in-memory index tree'  (duration: 22.7079ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-02T19:45:45.660Z","caller":"traceutil/trace.go:171","msg":"trace[622534470] range","detail":"{range_begin:/registry/masterleases/192.168.58.2; range_end:; response_count:0; response_revision:580; }","duration":"118.427ms","start":"2022-06-02T19:45:45.542Z","end":"2022-06-02T19:45:45.660Z","steps":["trace[622534470] 'agreement among raft nodes before linearized reading'  (duration: 94.9777ms)","trace[622534470] 'range keys from in-memory index tree'  (duration: 22.7024ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-02T19:45:56.355Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"105.5863ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kubernetes-dashboard/kubernetes-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-02T19:45:56.355Z","caller":"traceutil/trace.go:171","msg":"trace[120776893] range","detail":"{range_begin:/registry/services/specs/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:0; response_revision:642; }","duration":"105.784ms","start":"2022-06-02T19:45:56.250Z","end":"2022-06-02T19:45:56.355Z","steps":["trace[120776893] 'range keys from in-memory index tree'  (duration: 102.9097ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-02T19:45:59.337Z","caller":"traceutil/trace.go:171","msg":"trace[249791148] transaction","detail":"{read_only:false; response_revision:657; number_of_response:1; }","duration":"100.1132ms","start":"2022-06-02T19:45:59.237Z","end":"2022-06-02T19:45:59.337Z","steps":["trace[249791148] 'process raft request'  (duration: 99.7216ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-02T19:45:59.338Z","caller":"traceutil/trace.go:171","msg":"trace[1734302757] linearizableReadLoop","detail":"{readStateIndex:695; appliedIndex:694; }","duration":"100.4407ms","start":"2022-06-02T19:45:59.238Z","end":"2022-06-02T19:45:59.338Z","steps":["trace[1734302757] 'read index received'  (duration: 98.8763ms)","trace[1734302757] 'applied index is now lower than readState.Index'  (duration: 1.5611ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-02T19:45:59.339Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"101.5752ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:260"}
	{"level":"info","ts":"2022-06-02T19:45:59.339Z","caller":"traceutil/trace.go:171","msg":"trace[2788297] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:663; }","duration":"101.9595ms","start":"2022-06-02T19:45:59.237Z","end":"2022-06-02T19:45:59.339Z","steps":["trace[2788297] 'agreement among raft nodes before linearized reading'  (duration: 101.4912ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T19:45:59.339Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"179.8989ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:269"}
	{"level":"info","ts":"2022-06-02T19:45:59.339Z","caller":"traceutil/trace.go:171","msg":"trace[1587263312] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:663; }","duration":"179.9674ms","start":"2022-06-02T19:45:59.159Z","end":"2022-06-02T19:45:59.339Z","steps":["trace[1587263312] 'agreement among raft nodes before linearized reading'  (duration: 179.8528ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T19:45:59.362Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.8839ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3982"}
	{"level":"info","ts":"2022-06-02T19:45:59.363Z","caller":"traceutil/trace.go:171","msg":"trace[2101052752] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:663; }","duration":"107.1489ms","start":"2022-06-02T19:45:59.256Z","end":"2022-06-02T19:45:59.363Z","steps":["trace[2101052752] 'agreement among raft nodes before linearized reading'  (duration: 106.797ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T19:45:59.363Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"115.7089ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kubernetes-dashboard/default\" ","response":"range_response_count:1 size:199"}
	{"level":"info","ts":"2022-06-02T19:45:59.363Z","caller":"traceutil/trace.go:171","msg":"trace[183619020] range","detail":"{range_begin:/registry/serviceaccounts/kubernetes-dashboard/default; range_end:; response_count:1; response_revision:663; }","duration":"116.0293ms","start":"2022-06-02T19:45:59.247Z","end":"2022-06-02T19:45:59.363Z","steps":["trace[183619020] 'agreement among raft nodes before linearized reading'  (duration: 115.5069ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T19:45:59.648Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"104.3687ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" ","response":"range_response_count:1 size:4887"}
	{"level":"info","ts":"2022-06-02T19:45:59.649Z","caller":"traceutil/trace.go:171","msg":"trace[1128804163] range","detail":"{range_begin:/registry/deployments/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:674; }","duration":"104.653ms","start":"2022-06-02T19:45:59.544Z","end":"2022-06-02T19:45:59.649Z","steps":["trace[1128804163] 'agreement among raft nodes before linearized reading'  (duration: 93.1928ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T19:45:59.648Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"103.3026ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3982"}
	{"level":"info","ts":"2022-06-02T19:45:59.649Z","caller":"traceutil/trace.go:171","msg":"trace[196657013] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:674; }","duration":"103.7158ms","start":"2022-06-02T19:45:59.545Z","end":"2022-06-02T19:45:59.649Z","steps":["trace[196657013] 'agreement among raft nodes before linearized reading'  (duration: 92.018ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-02T19:45:59.776Z","caller":"traceutil/trace.go:171","msg":"trace[706700929] transaction","detail":"{read_only:false; response_revision:683; number_of_response:1; }","duration":"109.4469ms","start":"2022-06-02T19:45:59.666Z","end":"2022-06-02T19:45:59.776Z","steps":["trace[706700929] 'process raft request'  (duration: 102.1447ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T19:46:42.645Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"101.8677ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2022-06-02T19:46:42.645Z","caller":"traceutil/trace.go:171","msg":"trace[97225568] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:0; response_revision:781; }","duration":"102.1615ms","start":"2022-06-02T19:46:42.543Z","end":"2022-06-02T19:46:42.645Z","steps":["trace[97225568] 'count revisions from in-memory index tree'  (duration: 101.6883ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  19:46:44 up  2:36,  0 users,  load average: 10.10, 6.48, 5.38
	Linux newest-cni-20220602193528-12108 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [2bdb3f26ce03] <==
	* I0602 19:45:41.637652       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0602 19:45:41.648768       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	W0602 19:45:42.058100       1 handler_proxy.go:104] no RequestInfo found in the context
	E0602 19:45:42.058250       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0602 19:45:42.058286       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0602 19:45:44.848949       1 trace.go:205] Trace[456318723]: "Get" url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller,user-agent:kube-apiserver/v1.23.6 (linux/amd64) kubernetes/ad33385,audit-id:0e1ba5c1-5210-4204-b433-79a743d24410,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (02-Jun-2022 19:45:44.345) (total time: 503ms):
	Trace[456318723]: ---"About to write a response" 503ms (19:45:44.848)
	Trace[456318723]: [503.5121ms] [503.5121ms] END
	I0602 19:45:46.748430       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0602 19:45:46.768848       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0602 19:45:46.847843       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0602 19:45:47.253913       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0602 19:45:47.476389       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0602 19:45:47.559530       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0602 19:45:55.249411       1 controller.go:611] quota admission added evaluator for: namespaces
	I0602 19:45:56.448379       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.111.246.143]
	I0602 19:45:56.645509       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.106.248.67]
	I0602 19:45:59.143225       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0602 19:45:59.255000       1 controller.go:611] quota admission added evaluator for: endpoints
	I0602 19:45:59.348013       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	W0602 19:46:42.058486       1 handler_proxy.go:104] no RequestInfo found in the context
	E0602 19:46:42.058605       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0602 19:46:42.058625       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-apiserver [fcbc8319890a] <==
	* W0602 19:44:23.441474       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441485       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441230       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441598       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441633       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441645       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441668       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441266       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441713       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441693       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441732       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441747       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441762       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441822       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441825       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441788       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441790       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441885       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441793       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.442043       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.442059       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.442105       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.442208       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.442300       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441924       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-controller-manager [a13b2fac238d] <==
	* I0602 19:45:59.039290       1 shared_informer.go:247] Caches are synced for expand 
	I0602 19:45:59.040087       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0602 19:45:59.040292       1 shared_informer.go:247] Caches are synced for namespace 
	I0602 19:45:59.041684       1 range_allocator.go:173] Starting range CIDR allocator
	I0602 19:45:59.041739       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0602 19:45:59.041761       1 shared_informer.go:247] Caches are synced for cidrallocator 
	W0602 19:45:59.040448       1 node_lifecycle_controller.go:1012] Missing timestamp for Node newest-cni-20220602193528-12108. Assuming now as a timestamp.
	I0602 19:45:59.042065       1 node_lifecycle_controller.go:1213] Controller detected that zone  is now in state Normal.
	I0602 19:45:59.040611       1 event.go:294] "Event occurred" object="newest-cni-20220602193528-12108" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220602193528-12108 event: Registered Node newest-cni-20220602193528-12108 in Controller"
	E0602 19:45:59.047599       1 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0602 19:45:59.048609       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 19:45:59.049226       1 shared_informer.go:247] Caches are synced for resource quota 
	E0602 19:45:59.051632       1 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0602 19:45:59.136817       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0602 19:45:59.137256       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0602 19:45:59.144608       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0602 19:45:59.156826       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0602 19:45:59.242636       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-cd7c84bfc to 1"
	I0602 19:45:59.451140       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 19:45:59.451288       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0602 19:45:59.546406       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 19:45:59.553597       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-cd7c84bfc-xsbcz"
	I0602 19:45:59.553925       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-9zf2w"
	E0602 19:46:29.158775       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0602 19:46:29.643198       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-controller-manager [f0f4349e2a50] <==
	* I0602 19:43:59.436834       1 shared_informer.go:247] Caches are synced for node 
	I0602 19:43:59.436974       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0602 19:43:59.436910       1 shared_informer.go:247] Caches are synced for GC 
	I0602 19:43:59.442785       1 range_allocator.go:173] Starting range CIDR allocator
	I0602 19:43:59.443028       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0602 19:43:59.443048       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0602 19:43:59.443124       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 19:43:59.444243       1 event.go:294] "Event occurred" object="newest-cni-20220602193528-12108" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220602193528-12108 event: Registered Node newest-cni-20220602193528-12108 in Controller"
	I0602 19:43:59.448657       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0602 19:43:59.451108       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0602 19:43:59.451208       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0602 19:43:59.468244       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0602 19:43:59.536183       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0602 19:43:59.740052       1 range_allocator.go:374] Set node newest-cni-20220602193528-12108 PodCIDR to [192.168.0.0/24]
	I0602 19:43:59.740122       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0602 19:43:59.935664       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 19:43:59.935742       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0602 19:43:59.936505       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 19:44:00.454032       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-2sl4x"
	I0602 19:44:00.454577       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6qlxd"
	I0602 19:44:00.608214       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-nvh82"
	I0602 19:44:00.859355       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0602 19:44:00.981277       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-2sl4x"
	I0602 19:44:14.358917       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0602 19:44:14.456327       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-4zjkc"
	
	* 
	* ==> kube-proxy [1705a550b9b6] <==
	* E0602 19:45:46.346189       1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I0602 19:45:46.352530       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0602 19:45:46.358229       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0602 19:45:46.362670       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0602 19:45:46.370433       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0602 19:45:46.377719       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0602 19:45:46.467336       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0602 19:45:46.467548       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0602 19:45:46.467621       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 19:45:46.757244       1 server_others.go:206] "Using iptables Proxier"
	I0602 19:45:46.757709       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 19:45:46.757744       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 19:45:46.757850       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 19:45:46.759429       1 server.go:656] "Version info" version="v1.23.6"
	I0602 19:45:46.763092       1 config.go:226] "Starting endpoint slice config controller"
	I0602 19:45:46.763119       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 19:45:46.763205       1 config.go:317] "Starting service config controller"
	I0602 19:45:46.763219       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 19:45:46.863501       1 shared_informer.go:247] Caches are synced for service config 
	I0602 19:45:46.863660       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-proxy [e576c766bf2a] <==
	* E0602 19:44:05.054582       1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I0602 19:44:05.140592       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0602 19:44:05.148491       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0602 19:44:05.153924       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0602 19:44:05.239074       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0602 19:44:05.243140       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0602 19:44:05.449564       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0602 19:44:05.449818       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0602 19:44:05.449868       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 19:44:05.746367       1 server_others.go:206] "Using iptables Proxier"
	I0602 19:44:05.746488       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 19:44:05.746502       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 19:44:05.746533       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 19:44:05.747677       1 server.go:656] "Version info" version="v1.23.6"
	I0602 19:44:05.749240       1 config.go:317] "Starting service config controller"
	I0602 19:44:05.749361       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 19:44:05.749278       1 config.go:226] "Starting endpoint slice config controller"
	I0602 19:44:05.749405       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 19:44:05.849742       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0602 19:44:05.849914       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [2e5f7364c8be] <==
	* W0602 19:45:33.944916       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0602 19:45:35.070930       1 serving.go:348] Generated self-signed cert in-memory
	W0602 19:45:40.647732       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0602 19:45:40.647795       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0602 19:45:40.647817       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0602 19:45:40.647829       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0602 19:45:40.936389       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
	I0602 19:45:40.947866       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0602 19:45:40.951345       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0602 19:45:40.949102       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0602 19:45:40.949130       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0602 19:45:41.051469       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [d9cb2e830613] <==
	* E0602 19:43:15.238196       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0602 19:43:15.241464       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0602 19:43:15.241624       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0602 19:43:15.369379       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0602 19:43:15.369519       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0602 19:43:15.373444       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0602 19:43:15.373601       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0602 19:43:15.437712       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0602 19:43:15.437757       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0602 19:43:15.437795       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0602 19:43:15.437827       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0602 19:43:15.452182       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0602 19:43:15.452374       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0602 19:43:15.538996       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0602 19:43:15.539199       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0602 19:43:15.540511       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0602 19:43:15.540645       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0602 19:43:15.572678       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0602 19:43:15.572794       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0602 19:43:17.586808       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0602 19:43:17.586974       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0602 19:43:21.848972       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0602 19:44:22.237837       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0602 19:44:22.238714       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	I0602 19:44:22.240124       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-06-02 19:44:54 UTC, end at Thu 2022-06-02 19:46:46 UTC. --
	Jun 02 19:46:43 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:46:43.338113     945 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"69465c0afced5f5f0e8724d1e2a80e0468baebdc6c38b5168f404a879963fd96\" network for pod \"metrics-server-b955d9d8-4zjkc\": networkPlugin cni failed to set up pod \"metrics-server-b955d9d8-4zjkc_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"69465c0afced5f5f0e8724d1e2a80e0468baebdc6c38b5168f404a879963fd96\" network for pod \"metrics-server-b955d9d8-4zjkc\": networkPlugin cni failed to teardown pod \"metrics-server-b955d9d8-4zjkc_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.54 -j CNI-fa3ad2ba5ffe593ff2afabd7 -m comment --comment name: \"crio\" id: \"69465c0afced5f5f0e8724d1e2a80e0468baebdc6c38b5168f404a879963fd96\" --wait]: exit status 2: iptable
s v1.8.4 (legacy): Couldn't load target `CNI-fa3ad2ba5ffe593ff2afabd7':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/metrics-server-b955d9d8-4zjkc"
	Jun 02 19:46:43 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:46:43.338288     945 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"metrics-server-b955d9d8-4zjkc_kube-system(f7310338-75db-4112-9f21-d33fba8787e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"metrics-server-b955d9d8-4zjkc_kube-system(f7310338-75db-4112-9f21-d33fba8787e7)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"69465c0afced5f5f0e8724d1e2a80e0468baebdc6c38b5168f404a879963fd96\\\" network for pod \\\"metrics-server-b955d9d8-4zjkc\\\": networkPlugin cni failed to set up pod \\\"metrics-server-b955d9d8-4zjkc_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"69465c0afced5f5f0e8724d1e2a80e0468baebdc6c38b5168f404a879963fd96\\\" network for pod \\\"metrics-server-b955d9d8-4zjkc\\\": networkPlugin cni failed to teardown pod \\\"metr
ics-server-b955d9d8-4zjkc_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.54 -j CNI-fa3ad2ba5ffe593ff2afabd7 -m comment --comment name: \\\"crio\\\" id: \\\"69465c0afced5f5f0e8724d1e2a80e0468baebdc6c38b5168f404a879963fd96\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-fa3ad2ba5ffe593ff2afabd7':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/metrics-server-b955d9d8-4zjkc" podUID=f7310338-75db-4112-9f21-d33fba8787e7
	Jun 02 19:46:43 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:46:43.596479     945 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"6a0f648b0e61f727e133bad1fb539af57607e6ff8f9a10a17fab804b664407fd\" network for pod \"dashboard-metrics-scraper-56974995fc-9zf2w\": networkPlugin cni failed to set up pod \"dashboard-metrics-scraper-56974995fc-9zf2w_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"6a0f648b0e61f727e133bad1fb539af57607e6ff8f9a10a17fab804b664407fd\" network for pod \"dashboard-metrics-scraper-56974995fc-9zf2w\": networkPlugin cni failed to teardown pod \"dashboard-metrics-scraper-56974995fc-9zf2w_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.55 -j CNI-13ec0acbe63fd9c5be3be35d -m comment --comment name: \"crio\" id: \"6a0f648b0e61f727e133bad
1fb539af57607e6ff8f9a10a17fab804b664407fd\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-13ec0acbe63fd9c5be3be35d':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 02 19:46:43 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:46:43.596729     945 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"6a0f648b0e61f727e133bad1fb539af57607e6ff8f9a10a17fab804b664407fd\" network for pod \"dashboard-metrics-scraper-56974995fc-9zf2w\": networkPlugin cni failed to set up pod \"dashboard-metrics-scraper-56974995fc-9zf2w_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"6a0f648b0e61f727e133bad1fb539af57607e6ff8f9a10a17fab804b664407fd\" network for pod \"dashboard-metrics-scraper-56974995fc-9zf2w\": networkPlugin cni failed to teardown pod \"dashboard-metrics-scraper-56974995fc-9zf2w_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.55 -j CNI-13ec0acbe63fd9c5be3be35d -m comment --comment name: \"crio\" id: \"6a0f648b0e61f727e133bad1fb53
9af57607e6ff8f9a10a17fab804b664407fd\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-13ec0acbe63fd9c5be3be35d':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-9zf2w"
	Jun 02 19:46:43 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:46:43.596780     945 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"6a0f648b0e61f727e133bad1fb539af57607e6ff8f9a10a17fab804b664407fd\" network for pod \"dashboard-metrics-scraper-56974995fc-9zf2w\": networkPlugin cni failed to set up pod \"dashboard-metrics-scraper-56974995fc-9zf2w_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"6a0f648b0e61f727e133bad1fb539af57607e6ff8f9a10a17fab804b664407fd\" network for pod \"dashboard-metrics-scraper-56974995fc-9zf2w\": networkPlugin cni failed to teardown pod \"dashboard-metrics-scraper-56974995fc-9zf2w_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.55 -j CNI-13ec0acbe63fd9c5be3be35d -m comment --comment name: \"crio\" id: \"6a0f648b0e61f727e133bad1fb539af57607e6ff8f9a10a17fab804b664407fd\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-13ec0acbe63fd9c5be3be35d':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-9zf2w"
	Jun 02 19:46:43 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:46:43.596873     945 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dashboard-metrics-scraper-56974995fc-9zf2w_kubernetes-dashboard(0fcdccde-11fb-4570-a19d-b572b11432d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dashboard-metrics-scraper-56974995fc-9zf2w_kubernetes-dashboard(0fcdccde-11fb-4570-a19d-b572b11432d3)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"6a0f648b0e61f727e133bad1fb539af57607e6ff8f9a10a17fab804b664407fd\\\" network for pod \\\"dashboard-metrics-scraper-56974995fc-9zf2w\\\": networkPlugin cni failed to set up pod \\\"dashboard-metrics-scraper-56974995fc-9zf2w_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"6a0f648b0e61f727e133bad1fb539af57607e6ff8f9a10a17fab804b664407fd\\\" network for pod \\\"dashboard-metrics-scraper-56974995fc-9zf2w\\\": networkPlugin cni failed to teardown pod \\\"dashboard-metrics-scraper-56974995fc-9zf2w_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.55 -j CNI-13ec0acbe63fd9c5be3be35d -m comment --comment name: \\\"crio\\\" id: \\\"6a0f648b0e61f727e133bad1fb539af57607e6ff8f9a10a17fab804b664407fd\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-13ec0acbe63fd9c5be3be35d':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-9zf2w" podUID=0fcdccde-11fb-4570-a19d-b572b11432d3
	Jun 02 19:46:45 newest-cni-20220602193528-12108 kubelet[945]: I0602 19:46:45.147066     945 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="51b0465a017c50edc34b4611eae82c916734780b3eb45d15087b35a050059f9c"
	Jun 02 19:46:45 newest-cni-20220602193528-12108 kubelet[945]: I0602 19:46:45.155698     945 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"metrics-server-b955d9d8-4zjkc_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"69465c0afced5f5f0e8724d1e2a80e0468baebdc6c38b5168f404a879963fd96\""
	Jun 02 19:46:45 newest-cni-20220602193528-12108 kubelet[945]: I0602 19:46:45.241744     945 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"69465c0afced5f5f0e8724d1e2a80e0468baebdc6c38b5168f404a879963fd96\""
	Jun 02 19:46:45 newest-cni-20220602193528-12108 kubelet[945]: I0602 19:46:45.244681     945 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"dashboard-metrics-scraper-56974995fc-9zf2w_kubernetes-dashboard\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"6a0f648b0e61f727e133bad1fb539af57607e6ff8f9a10a17fab804b664407fd\""
	Jun 02 19:46:45 newest-cni-20220602193528-12108 kubelet[945]: I0602 19:46:45.257546     945 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"6a0f648b0e61f727e133bad1fb539af57607e6ff8f9a10a17fab804b664407fd\""
	Jun 02 19:46:45 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:46:45.257555     945 cni.go:362] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-64897985d-nvh82" podSandboxID={Type:docker ID:c1837e621f27246cd459f050a5bcd83af8816abbb1d593c8c76e5fdcb63f8fba} podNetnsPath="/proc/9889/ns/net" networkType="bridge" networkName="crio"
	Jun 02 19:46:45 newest-cni-20220602193528-12108 kubelet[945]: I0602 19:46:45.341077     945 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="c1837e621f27246cd459f050a5bcd83af8816abbb1d593c8c76e5fdcb63f8fba"
	Jun 02 19:46:45 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:46:45.342867     945 cni.go:362] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-xsbcz" podSandboxID={Type:docker ID:51b0465a017c50edc34b4611eae82c916734780b3eb45d15087b35a050059f9c} podNetnsPath="/proc/9881/ns/net" networkType="bridge" networkName="crio"
	Jun 02 19:46:45 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:46:45.660845     945 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.56 -j CNI-b720f366405dafa6eb578660 -m comment --comment name: \"crio\" id: \"c1837e621f27246cd459f050a5bcd83af8816abbb1d593c8c76e5fdcb63f8fba\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-b720f366405dafa6eb578660':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kube-system/coredns-64897985d-nvh82" podSandboxID={Type:docker ID:c1837e621f27246cd459f050a5bcd83af8816abbb1d593c8c76e5fdcb63f8fba} podNetnsPath="/proc/9889/ns/net" networkType="bridge" networkName="crio"
	Jun 02 19:46:45 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:46:45.762980     945 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.57 -j CNI-82e5db239e03789aaa4d7ac9 -m comment --comment name: \"crio\" id: \"51b0465a017c50edc34b4611eae82c916734780b3eb45d15087b35a050059f9c\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-82e5db239e03789aaa4d7ac9':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-xsbcz" podSandboxID={Type:docker ID:51b0465a017c50edc34b4611eae82c916734780b3eb45d15087b35a050059f9c} podNetnsPath="/proc/9881/ns/net" networkType="bridge" networkName="crio"
	Jun 02 19:46:46 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:46:46.438501     945 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"c1837e621f27246cd459f050a5bcd83af8816abbb1d593c8c76e5fdcb63f8fba\" network for pod \"coredns-64897985d-nvh82\": networkPlugin cni failed to set up pod \"coredns-64897985d-nvh82_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"c1837e621f27246cd459f050a5bcd83af8816abbb1d593c8c76e5fdcb63f8fba\" network for pod \"coredns-64897985d-nvh82\": networkPlugin cni failed to teardown pod \"coredns-64897985d-nvh82_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.56 -j CNI-b720f366405dafa6eb578660 -m comment --comment name: \"crio\" id: \"c1837e621f27246cd459f050a5bcd83af8816abbb1d593c8c76e5fdcb63f8fba\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-b720f366405dafa6eb578660':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 02 19:46:46 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:46:46.438667     945 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"c1837e621f27246cd459f050a5bcd83af8816abbb1d593c8c76e5fdcb63f8fba\" network for pod \"coredns-64897985d-nvh82\": networkPlugin cni failed to set up pod \"coredns-64897985d-nvh82_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"c1837e621f27246cd459f050a5bcd83af8816abbb1d593c8c76e5fdcb63f8fba\" network for pod \"coredns-64897985d-nvh82\": networkPlugin cni failed to teardown pod \"coredns-64897985d-nvh82_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.56 -j CNI-b720f366405dafa6eb578660 -m comment --comment name: \"crio\" id: \"c1837e621f27246cd459f050a5bcd83af8816abbb1d593c8c76e5fdcb63f8fba\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-b720f366405dafa6eb578660':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-nvh82"
	Jun 02 19:46:46 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:46:46.438735     945 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"c1837e621f27246cd459f050a5bcd83af8816abbb1d593c8c76e5fdcb63f8fba\" network for pod \"coredns-64897985d-nvh82\": networkPlugin cni failed to set up pod \"coredns-64897985d-nvh82_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"c1837e621f27246cd459f050a5bcd83af8816abbb1d593c8c76e5fdcb63f8fba\" network for pod \"coredns-64897985d-nvh82\": networkPlugin cni failed to teardown pod \"coredns-64897985d-nvh82_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.56 -j CNI-b720f366405dafa6eb578660 -m comment --comment name: \"crio\" id: \"c1837e621f27246cd459f050a5bcd83af8816abbb1d593c8c76e5fdcb63f8fba\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-b720f366405dafa6eb578660':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-nvh82"
	Jun 02 19:46:46 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:46:46.439465     945 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-64897985d-nvh82_kube-system(e020a13f-06c3-4682-8596-3644e6368c0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-64897985d-nvh82_kube-system(e020a13f-06c3-4682-8596-3644e6368c0d)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"c1837e621f27246cd459f050a5bcd83af8816abbb1d593c8c76e5fdcb63f8fba\\\" network for pod \\\"coredns-64897985d-nvh82\\\": networkPlugin cni failed to set up pod \\\"coredns-64897985d-nvh82_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"c1837e621f27246cd459f050a5bcd83af8816abbb1d593c8c76e5fdcb63f8fba\\\" network for pod \\\"coredns-64897985d-nvh82\\\": networkPlugin cni failed to teardown pod \\\"coredns-64897985d-nvh82_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.56 -j CNI-b720f366405dafa6eb578660 -m comment --comment name: \\\"crio\\\" id: \\\"c1837e621f27246cd459f050a5bcd83af8816abbb1d593c8c76e5fdcb63f8fba\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-b720f366405dafa6eb578660':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-64897985d-nvh82" podUID=e020a13f-06c3-4682-8596-3644e6368c0d
	Jun 02 19:46:46 newest-cni-20220602193528-12108 kubelet[945]: I0602 19:46:46.441453     945 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"coredns-64897985d-nvh82_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"c1837e621f27246cd459f050a5bcd83af8816abbb1d593c8c76e5fdcb63f8fba\""
	Jun 02 19:46:46 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:46:46.579264     945 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"51b0465a017c50edc34b4611eae82c916734780b3eb45d15087b35a050059f9c\" network for pod \"kubernetes-dashboard-cd7c84bfc-xsbcz\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-cd7c84bfc-xsbcz_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"51b0465a017c50edc34b4611eae82c916734780b3eb45d15087b35a050059f9c\" network for pod \"kubernetes-dashboard-cd7c84bfc-xsbcz\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-cd7c84bfc-xsbcz_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.57 -j CNI-82e5db239e03789aaa4d7ac9 -m comment --comment name: \"crio\" id: \"51b0465a017c50edc34b4611eae82c916734780b3eb45d15087b35a050059f9c\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-82e5db239e03789aaa4d7ac9':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 02 19:46:46 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:46:46.579414     945 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"51b0465a017c50edc34b4611eae82c916734780b3eb45d15087b35a050059f9c\" network for pod \"kubernetes-dashboard-cd7c84bfc-xsbcz\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-cd7c84bfc-xsbcz_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"51b0465a017c50edc34b4611eae82c916734780b3eb45d15087b35a050059f9c\" network for pod \"kubernetes-dashboard-cd7c84bfc-xsbcz\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-cd7c84bfc-xsbcz_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.57 -j CNI-82e5db239e03789aaa4d7ac9 -m comment --comment name: \"crio\" id: \"51b0465a017c50edc34b4611eae82c916734780b3eb45d15087b35a050059f9c\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-82e5db239e03789aaa4d7ac9':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-xsbcz"
	Jun 02 19:46:46 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:46:46.579496     945 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"51b0465a017c50edc34b4611eae82c916734780b3eb45d15087b35a050059f9c\" network for pod \"kubernetes-dashboard-cd7c84bfc-xsbcz\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-cd7c84bfc-xsbcz_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"51b0465a017c50edc34b4611eae82c916734780b3eb45d15087b35a050059f9c\" network for pod \"kubernetes-dashboard-cd7c84bfc-xsbcz\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-cd7c84bfc-xsbcz_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.57 -j CNI-82e5db239e03789aaa4d7ac9 -m comment --comment name: \"crio\" id: \"51b0465a017c50edc34b4611eae82c916734780b3eb45d15087b35a050059f9c\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-82e5db239e03789aaa4d7ac9':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-xsbcz"
	Jun 02 19:46:46 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:46:46.579804     945 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kubernetes-dashboard-cd7c84bfc-xsbcz_kubernetes-dashboard(6e5f4f81-7f1a-4dbe-acda-91cfedb0abcf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kubernetes-dashboard-cd7c84bfc-xsbcz_kubernetes-dashboard(6e5f4f81-7f1a-4dbe-acda-91cfedb0abcf)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"51b0465a017c50edc34b4611eae82c916734780b3eb45d15087b35a050059f9c\\\" network for pod \\\"kubernetes-dashboard-cd7c84bfc-xsbcz\\\": networkPlugin cni failed to set up pod \\\"kubernetes-dashboard-cd7c84bfc-xsbcz_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"51b0465a017c50edc34b4611eae82c916734780b3eb45d15087b35a050059f9c\\\" network for pod \\\"kubernetes-dashboard-cd7c84bfc-xsbcz\\\": networkPlugin cni failed to teardown pod \\\"kubernetes-dashboard-cd7c84bfc-xsbcz_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.57 -j CNI-82e5db239e03789aaa4d7ac9 -m comment --comment name: \\\"crio\\\" id: \\\"51b0465a017c50edc34b4611eae82c916734780b3eb45d15087b35a050059f9c\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-82e5db239e03789aaa4d7ac9':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-xsbcz" podUID=6e5f4f81-7f1a-4dbe-acda-91cfedb0abcf
	
	* 
	* ==> storage-provisioner [11e411025822] <==
	* I0602 19:45:44.345418       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0602 19:46:14.355427       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [f3713257900f] <==
	* I0602 19:46:29.142768       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
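The kubelet entries above all reduce to one root failure: the bridge CNI plugin cannot assign an address to the cni0 interface ("could not add IP address to \"cni0\": permission denied"), and every subsequent teardown fails with iptables exit status 2 because the per-sandbox CNI-* chain it tries to unlink no longer exists. The storage-provisioner i/o timeout against 10.96.0.1:443 is consistent with pod networking being down rather than a separate fault. A minimal diagnostic sketch from the host, reusing the profile name from this run (these commands are illustrative and not part of the test harness):

	# Inspect the bridge the plugin could not configure
	minikube ssh -p newest-cni-20220602193528-12108 -- ip addr show cni0
	# List NAT-table CNI chains; the CNI-13ec0acbe63fd9c5be3be35d chain named
	# in the teardown error should be absent
	minikube ssh -p newest-cni-20220602193528-12108 -- sudo iptables -t nat -S | grep CNI-
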
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20220602193528-12108 -n newest-cni-20220602193528-12108
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20220602193528-12108 -n newest-cni-20220602193528-12108: (8.5101358s)
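The --format flag used here takes a Go template over minikube's status struct, which is how the harness selects a single field such as {{.APIServer}} here and {{.Host}} at helpers_test.go:239 below. For manual triage the same template can combine several fields in one call; a sketch, assuming the Kubelet and Kubeconfig field names that minikube's default status output also reports:

	minikube status -p newest-cni-20220602193528-12108 --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'
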
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-20220602193528-12108 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
E0602 19:46:57.295972   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
helpers_test.go:270: non-running pods: coredns-64897985d-nvh82 metrics-server-b955d9d8-4zjkc dashboard-metrics-scraper-56974995fc-9zf2w kubernetes-dashboard-cd7c84bfc-xsbcz
helpers_test.go:272: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context newest-cni-20220602193528-12108 describe pod coredns-64897985d-nvh82 metrics-server-b955d9d8-4zjkc dashboard-metrics-scraper-56974995fc-9zf2w kubernetes-dashboard-cd7c84bfc-xsbcz
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context newest-cni-20220602193528-12108 describe pod coredns-64897985d-nvh82 metrics-server-b955d9d8-4zjkc dashboard-metrics-scraper-56974995fc-9zf2w kubernetes-dashboard-cd7c84bfc-xsbcz: exit status 1 (473.6072ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-nvh82" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-4zjkc" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-9zf2w" not found
	Error from server (NotFound): pods "kubernetes-dashboard-cd7c84bfc-xsbcz" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context newest-cni-20220602193528-12108 describe pod coredns-64897985d-nvh82 metrics-server-b955d9d8-4zjkc dashboard-metrics-scraper-56974995fc-9zf2w kubernetes-dashboard-cd7c84bfc-xsbcz: exit status 1
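The NotFound errors above are expected rather than a new failure: the describe at helpers_test.go:275 passes bare pod names with no namespace, so kubectl looks in the default namespace, while the pods just listed at helpers_test.go:270 live in kube-system and kubernetes-dashboard. A namespaced, label-based describe would reach them; a sketch, where the k8s-app label values are assumptions based on the upstream coredns and dashboard manifests:

	kubectl --context newest-cni-20220602193528-12108 describe pods -n kube-system -l k8s-app=kube-dns
	kubectl --context newest-cni-20220602193528-12108 describe pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
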
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220602193528-12108
helpers_test.go:231: (dbg) Done: docker inspect newest-cni-20220602193528-12108: (1.3738631s)
helpers_test.go:235: (dbg) docker inspect newest-cni-20220602193528-12108:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "494859eb0fb175e426210a880c5590c7d45afd36495c4b566f182cc72aceea2d",
	        "Created": "2022-06-02T19:42:09.2509866Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 277868,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T19:44:54.1265864Z",
	            "FinishedAt": "2022-06-02T19:44:32.7793205Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/494859eb0fb175e426210a880c5590c7d45afd36495c4b566f182cc72aceea2d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/494859eb0fb175e426210a880c5590c7d45afd36495c4b566f182cc72aceea2d/hostname",
	        "HostsPath": "/var/lib/docker/containers/494859eb0fb175e426210a880c5590c7d45afd36495c4b566f182cc72aceea2d/hosts",
	        "LogPath": "/var/lib/docker/containers/494859eb0fb175e426210a880c5590c7d45afd36495c4b566f182cc72aceea2d/494859eb0fb175e426210a880c5590c7d45afd36495c4b566f182cc72aceea2d-json.log",
	        "Name": "/newest-cni-20220602193528-12108",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-20220602193528-12108:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220602193528-12108",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f9a6db0863ea21cba32fe26a339ddd0c48312b903df5653e1b208d284dac1e2d-init/diff:/var/lib/docker/overlay2/dfce970b43800856c522d9750e5e1364e8adf4be4cf71ca7c53d79b33355f5a7/diff:/var/lib/docker/overlay2/4fd23a1b84854239f1bb855d05e42ecd6acbd1b0944b347813a56f5f45356a42/diff:/var/lib/docker/overlay2/864c5b1fbc297750771bb843fdeb4bafa10868a71716f4a01f1119609fb34667/diff:/var/lib/docker/overlay2/0f11f6855118857c743b90ca120ff7aa550f8157d475abf59df950433a5bc6e8/diff:/var/lib/docker/overlay2/2ae7f559725a060dc3b3a9c2fbd554b98114ae47dbf8db75f13bd8a95cbae19a/diff:/var/lib/docker/overlay2/48f41ac288d1037223ac101e6bc07f05729cdcecd98cc85971db99e90765c437/diff:/var/lib/docker/overlay2/8d4eaae639ade3ad3459b4fb67dbcac83774b72a2550b0a4bca1f21d122b20e6/diff:/var/lib/docker/overlay2/e06515bb91756221300de52336376d32ef9bd8685a92352e522936c4947b88ee/diff:/var/lib/docker/overlay2/a2f615fb794b704dc3823080c47e2c357cf4826ec91f6ae190c7497bb18a80cd/diff:/var/lib/docker/overlay2/22f99f
8a3da21c6e2be4c5c5e9d969af73e7695aaf9b0c7d0d09b5795ba76416/diff:/var/lib/docker/overlay2/9c0266785c64b9f6c471863067ca9db045a5aa61167a7817217cf01825a7d868/diff:/var/lib/docker/overlay2/b8a0250c9ae7d899ee3e46414c2db7f7ba363793900f8fcbf1b470586ebe7bd9/diff:/var/lib/docker/overlay2/00afbeac619cb9c06d4da311f5fc5aa3f5147b88b291acf06d4c4b36984ad5a2/diff:/var/lib/docker/overlay2/da51241ed08bd861b9d27902198eae13c3e4aac5c79f522e9f3fa209ea35e8d3/diff:/var/lib/docker/overlay2/b01176f7dbe98e3004db7c0fe45d94616a803dd8ae9cbdf3a1f2a188604178af/diff:/var/lib/docker/overlay2/0ebb0ff0177c8116e72a14ac704b161f75922cea05fe804ad1f7b83f4cd3dd70/diff:/var/lib/docker/overlay2/bae8d175bc3e334a70aaa239643efa0e8b453ab163f077d9cef60e3840c717ba/diff:/var/lib/docker/overlay2/e72a79f763a44dc32f9a2e84dc5e28a060e7fbb9f4624cb8aaa084dd356522ec/diff:/var/lib/docker/overlay2/2e1bc304b205033ad7f49fb8db243b0991596e0eec913fd13e8382aa25767e21/diff:/var/lib/docker/overlay2/ebb9b39dedfc09f9f34ea879f56a8ffd24ab9f9bf8acc93aa9df5eb93dba58e8/diff:/var/lib/d
ocker/overlay2/bffdca36eba4bce9086f2c269bcfe5b915d807483717f0e27acbd51b5bbfc11b/diff:/var/lib/docker/overlay2/96c321cbf06c0050c8a0a7897e9533db1ee5788eb09b1e1d605bdd1134af8eca/diff:/var/lib/docker/overlay2/735422b44af98e330209fe1c4273bf57aa33fcfd770f3e9d6f1a6e59f7545920/diff:/var/lib/docker/overlay2/8dc177c0589f67ded7d9c229d3c587fe77b3d1c68cf0a5af871bc23768d67d84/diff:/var/lib/docker/overlay2/9a29541ccfee3849e0691950c599bb7e4e51d9026724b1ad13abc8d8e9c140e0/diff:/var/lib/docker/overlay2/50fe1dc8f357b5d624681e6f14d98e6d33a8b6b53d70293ba90ac4435a1e18d8/diff:/var/lib/docker/overlay2/86f301a296dbb7422a3d55a008a9f38278a7a19d68a0f735d298c0c2a431ee30/diff:/var/lib/docker/overlay2/dc8087ea592587f8cb5392cc0ee739c33f2724c47b83767d593b3065914820b0/diff:/var/lib/docker/overlay2/15163601889f0d414f35ccd64ae33a52958605b5b7e50618ed5d4f4bd06ec65b/diff:/var/lib/docker/overlay2/a50cf19d9d69b9c68c6c66a918cbde678b49e8d566d06772af22bf99191b08f3/diff:/var/lib/docker/overlay2/621f3b0fc578721c5d0465771ad007f022ed238fa5a2076f807c077680c
26d27/diff:/var/lib/docker/overlay2/2652f9ffde92786a77e3bb35fe07c03a623aaad541f0ca9710839800c4b470e4/diff:/var/lib/docker/overlay2/c853755ee76ea55ad6c00f5eaff82196f4953ee6fb2d27e27ba35f86d56bfc32/diff:/var/lib/docker/overlay2/a0f70e6416a8e618ea7475b5e7f4cdc9a66ac39f0a6c1969c569d8e4f0b5e9eb/diff:/var/lib/docker/overlay2/275d2c643ecb011298df16e0794bebb9a7ec82e190aea53a90369288c521f75e/diff:/var/lib/docker/overlay2/a7e78f238badc23c2c38b7e9b9c4428c0614e825744076161295740d46a20957/diff:/var/lib/docker/overlay2/39fcd4c392271449973511a31d445289c1f8d378d01759fef12c430c9f44f2b8/diff:/var/lib/docker/overlay2/e1c51360d327e86575fe8248415fae12e9dbdde580db0e6f4f4e485ac9f92e3b/diff:/var/lib/docker/overlay2/fecd88783858177cbe3b751f0717b370c5556d7cf0ef163e2710f16fce09d53c/diff:/var/lib/docker/overlay2/3b4c7afaac6f5818bc33bec8c0ec442eb5a1010d0de6fe488460ee83a3901b21/diff:/var/lib/docker/overlay2/47d0047bc42c34ea02c33c1500f96c5109f27f84f973a5636832bbc855761e3f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f9a6db0863ea21cba32fe26a339ddd0c48312b903df5653e1b208d284dac1e2d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f9a6db0863ea21cba32fe26a339ddd0c48312b903df5653e1b208d284dac1e2d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f9a6db0863ea21cba32fe26a339ddd0c48312b903df5653e1b208d284dac1e2d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220602193528-12108",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220602193528-12108/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220602193528-12108",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220602193528-12108",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220602193528-12108",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "39e9ce1af27b6e7b2cf3511d876ba94d60b25e9fb53562144ceda7121413b8be",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54943"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54944"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54945"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54946"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54947"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/39e9ce1af27b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220602193528-12108": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "494859eb0fb1",
	                        "newest-cni-20220602193528-12108"
	                    ],
	                    "NetworkID": "4da5d80e8d86dd2da8c242516e00ea62a0606e89d2c53fe365cac4b3373e53c6",
	                    "EndpointID": "146dd801d3f4d1133b863665a10cd83555444977db5d090d9502d2c0072a3932",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
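The inspect dump confirms the node container itself is healthy: State.Running is true after the 19:44:54 restart, all control ports (22, 2376, 8443, and the rest) are published on 127.0.0.1, and the container holds 192.168.58.2 on its user-defined network, so the CNI failure above is inside the node rather than at the Docker layer. When only a few of these fields matter, docker inspect's --format template avoids the full dump; a sketch:

	docker inspect -f '{{.State.Status}} since {{.State.StartedAt}}' newest-cni-20220602193528-12108
	docker inspect -f '{{json .NetworkSettings.Ports}}' newest-cni-20220602193528-12108
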
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220602193528-12108 -n newest-cni-20220602193528-12108
E0602 19:47:08.001194   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-different-port-20220602192441-12108\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220602193528-12108 -n newest-cni-20220602193528-12108: (8.6332933s)
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-20220602193528-12108 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-20220602193528-12108 logs -n 25: (13.6531106s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |       User        |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|-------------------|----------------|---------------------|---------------------|
	| start   | -p                                                         | default-k8s-different-port-20220602192441-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:27 GMT | 02 Jun 22 19:34 GMT |
	|         | default-k8s-different-port-20220602192441-12108            |                                                 |                   |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |                   |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |                   |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                 |                   |                |                     |                     |
	| ssh     | -p                                                         | old-k8s-version-20220602192231-12108            | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:35 GMT | 02 Jun 22 19:35 GMT |
	|         | old-k8s-version-20220602192231-12108                       |                                                 |                   |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |                   |                |                     |                     |
	| delete  | -p                                                         | embed-certs-20220602192235-12108                | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:34 GMT | 02 Jun 22 19:35 GMT |
	|         | embed-certs-20220602192235-12108                           |                                                 |                   |                |                     |                     |
	| ssh     | -p                                                         | no-preload-20220602192234-12108                 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:35 GMT | 02 Jun 22 19:35 GMT |
	|         | no-preload-20220602192234-12108                            |                                                 |                   |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |                   |                |                     |                     |
	| pause   | -p                                                         | old-k8s-version-20220602192231-12108            | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:35 GMT | 02 Jun 22 19:35 GMT |
	|         | old-k8s-version-20220602192231-12108                       |                                                 |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |                   |                |                     |                     |
	| pause   | -p                                                         | no-preload-20220602192234-12108                 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:35 GMT | 02 Jun 22 19:35 GMT |
	|         | no-preload-20220602192234-12108                            |                                                 |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |                   |                |                     |                     |
	| delete  | -p                                                         | embed-certs-20220602192235-12108                | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:35 GMT | 02 Jun 22 19:35 GMT |
	|         | embed-certs-20220602192235-12108                           |                                                 |                   |                |                     |                     |
	| unpause | -p                                                         | old-k8s-version-20220602192231-12108            | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:35 GMT | 02 Jun 22 19:35 GMT |
	|         | old-k8s-version-20220602192231-12108                       |                                                 |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |                   |                |                     |                     |
	| unpause | -p                                                         | no-preload-20220602192234-12108                 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:35 GMT | 02 Jun 22 19:35 GMT |
	|         | no-preload-20220602192234-12108                            |                                                 |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |                   |                |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220602192441-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:35 GMT | 02 Jun 22 19:35 GMT |
	|         | default-k8s-different-port-20220602192441-12108            |                                                 |                   |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |                   |                |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220602192441-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:35 GMT | 02 Jun 22 19:36 GMT |
	|         | default-k8s-different-port-20220602192441-12108            |                                                 |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |                   |                |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220602192441-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:36 GMT | 02 Jun 22 19:36 GMT |
	|         | default-k8s-different-port-20220602192441-12108            |                                                 |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |                   |                |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220602192231-12108            | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:35 GMT | 02 Jun 22 19:36 GMT |
	|         | old-k8s-version-20220602192231-12108                       |                                                 |                   |                |                     |                     |
	| delete  | -p                                                         | no-preload-20220602192234-12108                 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:35 GMT | 02 Jun 22 19:36 GMT |
	|         | no-preload-20220602192234-12108                            |                                                 |                   |                |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220602192231-12108            | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:36 GMT | 02 Jun 22 19:36 GMT |
	|         | old-k8s-version-20220602192231-12108                       |                                                 |                   |                |                     |                     |
	| delete  | -p                                                         | no-preload-20220602192234-12108                 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:36 GMT | 02 Jun 22 19:36 GMT |
	|         | no-preload-20220602192234-12108                            |                                                 |                   |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220602192441-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:36 GMT | 02 Jun 22 19:37 GMT |
	|         | default-k8s-different-port-20220602192441-12108            |                                                 |                   |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220602192441-12108 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:37 GMT | 02 Jun 22 19:37 GMT |
	|         | default-k8s-different-port-20220602192441-12108            |                                                 |                   |                |                     |                     |
	| start   | -p newest-cni-20220602193528-12108 --memory=2200           | newest-cni-20220602193528-12108                 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:35 GMT | 02 Jun 22 19:44 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |                   |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |                   |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |                   |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |                   |                |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.23.6               |                                                 |                   |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220602193528-12108                 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:44 GMT | 02 Jun 22 19:44 GMT |
	|         | newest-cni-20220602193528-12108                            |                                                 |                   |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |                   |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |                   |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220602193528-12108                 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:44 GMT | 02 Jun 22 19:44 GMT |
	|         | newest-cni-20220602193528-12108                            |                                                 |                   |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |                   |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220602193528-12108                 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:44 GMT | 02 Jun 22 19:44 GMT |
	|         | newest-cni-20220602193528-12108                            |                                                 |                   |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |                   |                |                     |                     |
	| start   | -p newest-cni-20220602193528-12108 --memory=2200           | newest-cni-20220602193528-12108                 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:44 GMT | 02 Jun 22 19:45 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |                   |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |                   |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |                   |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |                   |                |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.23.6               |                                                 |                   |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220602193528-12108                 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:46 GMT | 02 Jun 22 19:46 GMT |
	|         | newest-cni-20220602193528-12108                            |                                                 |                   |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |                   |                |                     |                     |
	| logs    | newest-cni-20220602193528-12108                            | newest-cni-20220602193528-12108                 | minikube7\jenkins | v1.26.0-beta.1 | 02 Jun 22 19:46 GMT | 02 Jun 22 19:46 GMT |
	|         | logs -n 25                                                 |                                                 |                   |                |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|-------------------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 19:44:41
	Running on machine: minikube7
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 19:44:41.496479   13568 out.go:296] Setting OutFile to fd 704 ...
	I0602 19:44:41.560249   13568 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 19:44:41.560249   13568 out.go:309] Setting ErrFile to fd 1964...
	I0602 19:44:41.560249   13568 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 19:44:41.577216   13568 out.go:303] Setting JSON to false
	I0602 19:44:41.580894   13568 start.go:115] hostinfo: {"hostname":"minikube7","uptime":62223,"bootTime":1654136858,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0602 19:44:41.581464   13568 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 19:44:41.587340   13568 out.go:177] * [newest-cni-20220602193528-12108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0602 19:44:41.589716   13568 notify.go:193] Checking for updates...
	I0602 19:44:41.591861   13568 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0602 19:44:41.594967   13568 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0602 19:44:41.598290   13568 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 19:44:41.601364   13568 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 19:44:42.420346   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:44.424745   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:41.604841   13568 config.go:178] Loaded profile config "newest-cni-20220602193528-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:44:41.605718   13568 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 19:44:44.403715   13568 docker.go:137] docker version: linux-20.10.16
	I0602 19:44:44.417723   13568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:44:46.667733   13568 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2498578s)
	I0602 19:44:46.668460   13568 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:90 OomKillDisable:true NGoroutines:65 SystemTime:2022-06-02 19:44:45.5123415 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:44:46.677206   13568 out.go:177] * Using the docker driver based on existing profile
	I0602 19:44:46.680824   13568 start.go:284] selected driver: docker
	I0602 19:44:46.680824   13568 start.go:806] validating driver "docker" against &{Name:newest-cni-20220602193528-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220602193528-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 19:44:46.680824   13568 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 19:44:46.812980   13568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:44:49.211713   13568 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.3987221s)
	I0602 19:44:49.211890   13568 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:90 OomKillDisable:true NGoroutines:65 SystemTime:2022-06-02 19:44:48.025779 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:44:49.212532   13568 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0602 19:44:49.212532   13568 cni.go:95] Creating CNI manager for ""
	I0602 19:44:49.212532   13568 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 19:44:49.212532   13568 start_flags.go:306] config:
	{Name:newest-cni-20220602193528-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220602193528-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 19:44:49.216732   13568 out.go:177] * Starting control plane node newest-cni-20220602193528-12108 in cluster newest-cni-20220602193528-12108
	I0602 19:44:49.219113   13568 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 19:44:49.221725   13568 out.go:177] * Pulling base image ...
	I0602 19:44:46.934589   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:49.415977   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:49.223487   13568 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 19:44:49.223487   13568 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 19:44:49.223747   13568 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 19:44:49.223747   13568 cache.go:57] Caching tarball of preloaded images
	I0602 19:44:49.224300   13568 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 19:44:49.224481   13568 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0602 19:44:49.224898   13568 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\newest-cni-20220602193528-12108\config.json ...
	I0602 19:44:50.473910   13568 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 19:44:50.474138   13568 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 19:44:50.474138   13568 cache.go:206] Successfully downloaded all kic artifacts
	I0602 19:44:50.474138   13568 start.go:352] acquiring machines lock for newest-cni-20220602193528-12108: {Name:mk244be8bfa86d8f96622244132b3a037ccada35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 19:44:50.474138   13568 start.go:356] acquired machines lock for "newest-cni-20220602193528-12108" in 0s
	I0602 19:44:50.474138   13568 start.go:94] Skipping create...Using existing machine configuration
	I0602 19:44:50.474138   13568 fix.go:55] fixHost starting: 
	I0602 19:44:50.489121   13568 cli_runner.go:164] Run: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}
	I0602 19:44:51.833358   13568 cli_runner.go:217] Completed: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}: (1.3442309s)
	I0602 19:44:51.833443   13568 fix.go:103] recreateIfNeeded on newest-cni-20220602193528-12108: state=Stopped err=<nil>
	W0602 19:44:51.833443   13568 fix.go:129] unexpected machine state, will restart: <nil>
	I0602 19:44:51.843959   13568 out.go:177] * Restarting existing docker container for "newest-cni-20220602193528-12108" ...
	I0602 19:44:51.912972   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:53.923388   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:51.872962   13568 cli_runner.go:164] Run: docker start newest-cni-20220602193528-12108
	I0602 19:44:54.200741   13568 cli_runner.go:217] Completed: docker start newest-cni-20220602193528-12108: (2.3277688s)
	I0602 19:44:54.210764   13568 cli_runner.go:164] Run: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}
	I0602 19:44:55.505806   13568 cli_runner.go:217] Completed: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}: (1.2950365s)
	I0602 19:44:55.505806   13568 kic.go:416] container "newest-cni-20220602193528-12108" state is running.
	I0602 19:44:55.515774   13568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220602193528-12108
	I0602 19:44:56.418137   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:58.913097   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:44:56.822329   13568 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220602193528-12108: (1.3065496s)
	I0602 19:44:56.822329   13568 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\newest-cni-20220602193528-12108\config.json ...
	I0602 19:44:56.824322   13568 machine.go:88] provisioning docker machine ...
	I0602 19:44:56.824322   13568 ubuntu.go:169] provisioning hostname "newest-cni-20220602193528-12108"
	I0602 19:44:56.835344   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:44:58.141724   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.3062118s)
	I0602 19:44:58.148930   13568 main.go:134] libmachine: Using SSH client type: native
	I0602 19:44:58.149171   13568 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 54943 <nil> <nil>}
	I0602 19:44:58.149171   13568 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220602193528-12108 && echo "newest-cni-20220602193528-12108" | sudo tee /etc/hostname
	I0602 19:44:58.389505   13568 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220602193528-12108
	
	I0602 19:44:58.398345   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:44:59.676621   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.278216s)
	I0602 19:44:59.682531   13568 main.go:134] libmachine: Using SSH client type: native
	I0602 19:44:59.683534   13568 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 54943 <nil> <nil>}
	I0602 19:44:59.683534   13568 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220602193528-12108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220602193528-12108/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220602193528-12108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 19:44:59.822921   13568 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 19:44:59.822921   13568 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0602 19:44:59.822921   13568 ubuntu.go:177] setting up certificates
	I0602 19:44:59.823899   13568 provision.go:83] configureAuth start
	I0602 19:44:59.830897   13568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220602193528-12108
	I0602 19:45:01.075848   13568 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220602193528-12108: (1.2449458s)
	I0602 19:45:01.075848   13568 provision.go:138] copyHostCerts
	I0602 19:45:01.075848   13568 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0602 19:45:01.075848   13568 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0602 19:45:01.076851   13568 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0602 19:45:01.078834   13568 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0602 19:45:01.078834   13568 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0602 19:45:01.078834   13568 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0602 19:45:01.079834   13568 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0602 19:45:01.079834   13568 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0602 19:45:01.080821   13568 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1675 bytes)
	I0602 19:45:01.081822   13568 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-20220602193528-12108 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220602193528-12108]
	I0602 19:45:01.452887   13568 provision.go:172] copyRemoteCerts
	I0602 19:45:01.476806   13568 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 19:45:01.485801   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:02.168867   12568 out.go:204]   - Generating certificates and keys ...
	I0602 19:45:02.176856   12568 out.go:204]   - Booting up control plane ...
	I0602 19:45:02.184845   12568 out.go:204]   - Configuring RBAC rules ...
	I0602 19:45:02.189843   12568 cni.go:95] Creating CNI manager for "calico"
	I0602 19:45:02.194836   12568 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0602 19:45:03.360204    7936 out.go:204]   - Generating certificates and keys ...
	I0602 19:45:03.367424    7936 out.go:204]   - Booting up control plane ...
	I0602 19:45:00.926017   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:03.420253   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:05.421419   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:03.373207    7936 out.go:204]   - Configuring RBAC rules ...
	I0602 19:45:03.376660    7936 cni.go:95] Creating CNI manager for ""
	I0602 19:45:03.376660    7936 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 19:45:03.376660    7936 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0602 19:45:03.391258    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae minikube.k8s.io/name=auto-20220602191545-12108 minikube.k8s.io/updated_at=2022_06_02T19_45_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:03.394263    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:03.402292    7936 ops.go:34] apiserver oom_adj: -16
	I0602 19:45:05.567230    7936 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae minikube.k8s.io/name=auto-20220602191545-12108 minikube.k8s.io/updated_at=2022_06_02T19_45_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (2.1758709s)
	I0602 19:45:05.567271    7936 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (2.1729578s)
	I0602 19:45:05.585998    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:02.197833   12568 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0602 19:45:02.197833   12568 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
	I0602 19:45:02.299376   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0602 19:45:02.775165   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.2893581s)
	I0602 19:45:02.775642   13568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54943 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220602193528-12108\id_rsa Username:docker}
	I0602 19:45:02.893906   13568 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.417094s)
	I0602 19:45:02.893906   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0602 19:45:02.972035   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1265 bytes)
	I0602 19:45:03.026753   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0602 19:45:03.090723   13568 provision.go:86] duration metric: configureAuth took 3.2668103s
	I0602 19:45:03.090723   13568 ubuntu.go:193] setting minikube options for container-runtime
	I0602 19:45:03.092719   13568 config.go:178] Loaded profile config "newest-cni-20220602193528-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:45:03.099718   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:04.397842   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.298118s)
	I0602 19:45:04.400841   13568 main.go:134] libmachine: Using SSH client type: native
	I0602 19:45:04.401861   13568 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 54943 <nil> <nil>}
	I0602 19:45:04.401861   13568 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 19:45:04.554686   13568 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 19:45:04.554686   13568 ubuntu.go:71] root file system type: overlay
	I0602 19:45:04.555695   13568 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 19:45:04.570681   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:05.809103   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.2381808s)
	I0602 19:45:05.814611   13568 main.go:134] libmachine: Using SSH client type: native
	I0602 19:45:05.815060   13568 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 54943 <nil> <nil>}
	I0602 19:45:05.815060   13568 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 19:45:06.039515   13568 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 19:45:06.052500   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:07.914845   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:09.938824   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:06.267653    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:06.766675    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:07.278466    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:07.763872    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:08.272005    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:08.790342    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:09.269470    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:09.769916    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:10.267794    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:10.773803    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:07.859017   12568 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (5.5596171s)
	I0602 19:45:07.859017   12568 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0602 19:45:07.881503   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:07.882383   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae minikube.k8s.io/name=calico-20220602191616-12108 minikube.k8s.io/updated_at=2022_06_02T19_45_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:07.892171   12568 ops.go:34] apiserver oom_adj: -16
	I0602 19:45:08.186764   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:08.920983   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:09.421301   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:09.918889   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:10.421877   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:10.927612   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:11.417579   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:07.318908   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.2663066s)
	I0602 19:45:07.321909   13568 main.go:134] libmachine: Using SSH client type: native
	I0602 19:45:07.322906   13568 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 54943 <nil> <nil>}
	I0602 19:45:07.322906   13568 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 19:45:07.559165   13568 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 19:45:07.559165   13568 machine.go:91] provisioned docker machine in 10.7347973s
	I0602 19:45:07.559165   13568 start.go:306] post-start starting for "newest-cni-20220602193528-12108" (driver="docker")
	I0602 19:45:07.559165   13568 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 19:45:07.578887   13568 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 19:45:07.592670   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:08.848136   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.2554608s)
	I0602 19:45:08.848136   13568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54943 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220602193528-12108\id_rsa Username:docker}
	I0602 19:45:08.995068   13568 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.4161745s)
	I0602 19:45:09.007638   13568 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 19:45:09.025916   13568 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 19:45:09.025916   13568 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 19:45:09.025916   13568 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 19:45:09.025916   13568 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 19:45:09.026467   13568 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0602 19:45:09.026962   13568 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0602 19:45:09.027917   13568 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\121082.pem -> 121082.pem in /etc/ssl/certs
	I0602 19:45:09.038847   13568 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 19:45:09.065810   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\121082.pem --> /etc/ssl/certs/121082.pem (1708 bytes)
	I0602 19:45:09.125134   13568 start.go:309] post-start completed in 1.5659622s
	I0602 19:45:09.139028   13568 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 19:45:09.146768   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:10.318137   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.1713645s)
	I0602 19:45:10.318137   13568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54943 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220602193528-12108\id_rsa Username:docker}
	I0602 19:45:10.468508   13568 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.3293914s)
	I0602 19:45:10.485355   13568 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 19:45:10.503253   13568 fix.go:57] fixHost completed within 20.0290297s
	I0602 19:45:10.503253   13568 start.go:81] releasing machines lock for "newest-cni-20220602193528-12108", held for 20.0290297s
	I0602 19:45:10.510240   13568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220602193528-12108
	I0602 19:45:11.764835   13568 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220602193528-12108: (1.2545895s)
	I0602 19:45:11.766841   13568 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 19:45:11.775838   13568 ssh_runner.go:195] Run: systemctl --version
	I0602 19:45:11.776832   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:11.781837   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:13.110193   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.3333555s)
	I0602 19:45:13.110193   13568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54943 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220602193528-12108\id_rsa Username:docker}
	I0602 19:45:13.126150   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.3443077s)
	I0602 19:45:13.126150   13568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54943 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220602193528-12108\id_rsa Username:docker}
	I0602 19:45:13.338888   13568 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.5720402s)
	I0602 19:45:13.339010   13568 ssh_runner.go:235] Completed: systemctl --version: (1.5631659s)
	I0602 19:45:13.355269   13568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 19:45:13.399856   13568 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 19:45:13.429867   13568 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 19:45:13.446896   13568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 19:45:13.496691   13568 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 19:45:13.554954   13568 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 19:45:13.782934   13568 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 19:45:13.982227   13568 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 19:45:14.025906   13568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 19:45:14.226407   13568 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 19:45:14.279119   13568 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 19:45:14.396377   13568 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 19:45:12.422144   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:14.915305   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:11.272276    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:11.763865    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:12.269347    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:12.765737    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:13.771943    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:14.775567    7936 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:15.356980    7936 kubeadm.go:1045] duration metric: took 11.9798453s to wait for elevateKubeSystemPrivileges.
	I0602 19:45:15.357054    7936 kubeadm.go:397] StartCluster complete in 35.8082943s
	I0602 19:45:15.357125    7936 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:45:15.357125    7936 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0602 19:45:15.359683    7936 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:45:15.982921    7936 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "auto-20220602191545-12108" rescaled to 1
	I0602 19:45:15.982921    7936 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0602 19:45:15.982921    7936 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 19:45:15.986924    7936 out.go:177] * Verifying Kubernetes components...
	I0602 19:45:15.982921    7936 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0602 19:45:15.983881    7936 config.go:178] Loaded profile config "auto-20220602191545-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:45:15.986924    7936 addons.go:65] Setting storage-provisioner=true in profile "auto-20220602191545-12108"
	I0602 19:45:15.987879    7936 addons.go:65] Setting default-storageclass=true in profile "auto-20220602191545-12108"
	I0602 19:45:16.008518    7936 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-20220602191545-12108"
	I0602 19:45:16.008518    7936 addons.go:153] Setting addon storage-provisioner=true in "auto-20220602191545-12108"
	W0602 19:45:16.008518    7936 addons.go:165] addon storage-provisioner should already be in state true
	I0602 19:45:16.009115    7936 host.go:66] Checking if "auto-20220602191545-12108" exists ...
	I0602 19:45:16.027094    7936 ssh_runner.go:195] Run: sudo service kubelet status
	I0602 19:45:16.033100    7936 cli_runner.go:164] Run: docker container inspect auto-20220602191545-12108 --format={{.State.Status}}
	I0602 19:45:16.034090    7936 cli_runner.go:164] Run: docker container inspect auto-20220602191545-12108 --format={{.State.Status}}
	I0602 19:45:11.917089   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:12.416135   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:13.413869   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:13.919325   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:14.426836   12568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 19:45:15.260435   12568 kubeadm.go:1045] duration metric: took 7.4003542s to wait for elevateKubeSystemPrivileges.
	I0602 19:45:15.260435   12568 kubeadm.go:397] StartCluster complete in 35.7891698s
	I0602 19:45:15.260435   12568 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:45:15.261310   12568 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0602 19:45:15.263674   12568 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:45:16.162120   12568 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220602191616-12108" rescaled to 1
	I0602 19:45:16.162120   12568 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 19:45:16.163104   12568 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0602 19:45:16.163104   12568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0602 19:45:16.166127   12568 addons.go:65] Setting storage-provisioner=true in profile "calico-20220602191616-12108"
	I0602 19:45:16.166127   12568 addons.go:65] Setting default-storageclass=true in profile "calico-20220602191616-12108"
	I0602 19:45:16.166127   12568 addons.go:153] Setting addon storage-provisioner=true in "calico-20220602191616-12108"
	W0602 19:45:16.166127   12568 addons.go:165] addon storage-provisioner should already be in state true
	I0602 19:45:16.166127   12568 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220602191616-12108"
	I0602 19:45:16.167093   12568 host.go:66] Checking if "calico-20220602191616-12108" exists ...
	I0602 19:45:16.166127   12568 out.go:177] * Verifying Kubernetes components...
	I0602 19:45:16.163104   12568 config.go:178] Loaded profile config "calico-20220602191616-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:45:16.188102   12568 ssh_runner.go:195] Run: sudo service kubelet status
	I0602 19:45:16.190099   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:45:16.191095   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:45:14.514227   13568 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0602 19:45:14.521221   13568 cli_runner.go:164] Run: docker exec -t newest-cni-20220602193528-12108 dig +short host.docker.internal
	I0602 19:45:16.040121   13568 cli_runner.go:217] Completed: docker exec -t newest-cni-20220602193528-12108 dig +short host.docker.internal: (1.5188935s)
	I0602 19:45:16.040121   13568 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0602 19:45:16.062113   13568 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0602 19:45:16.077109   13568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 19:45:16.121098   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:16.467700    7936 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0602 19:45:16.484695    7936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" auto-20220602191545-12108
	I0602 19:45:17.700097    7936 cli_runner.go:217] Completed: docker container inspect auto-20220602191545-12108 --format={{.State.Status}}: (1.6669892s)
	I0602 19:45:17.703095    7936 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 19:45:17.065196   12568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0602 19:45:17.077175   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:45:17.855080   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.663978s)
	I0602 19:45:17.855080   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.7339754s)
	I0602 19:45:17.863095   12568 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 19:45:17.866114   13568 out.go:177]   - kubelet.network-plugin=cni
	I0602 19:45:17.872106   13568 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0602 19:45:16.925627   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:18.928961   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:17.706080    7936 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 19:45:17.706080    7936 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0602 19:45:17.722081    7936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220602191545-12108
	I0602 19:45:17.723098    7936 cli_runner.go:217] Completed: docker container inspect auto-20220602191545-12108 --format={{.State.Status}}: (1.689001s)
	I0602 19:45:17.751103    7936 addons.go:153] Setting addon default-storageclass=true in "auto-20220602191545-12108"
	W0602 19:45:17.751103    7936 addons.go:165] addon default-storageclass should already be in state true
	I0602 19:45:17.751103    7936 host.go:66] Checking if "auto-20220602191545-12108" exists ...
	I0602 19:45:17.789090    7936 cli_runner.go:164] Run: docker container inspect auto-20220602191545-12108 --format={{.State.Status}}
	I0602 19:45:18.136359    7936 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" auto-20220602191545-12108: (1.6516568s)
	I0602 19:45:18.146383    7936 node_ready.go:35] waiting up to 5m0s for node "auto-20220602191545-12108" to be "Ready" ...
	I0602 19:45:18.167398    7936 node_ready.go:49] node "auto-20220602191545-12108" has status "Ready":"True"
	I0602 19:45:18.167398    7936 node_ready.go:38] duration metric: took 20.0156ms waiting for node "auto-20220602191545-12108" to be "Ready" ...
	I0602 19:45:18.167398    7936 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 19:45:18.264735    7936 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-dsm5l" in "kube-system" namespace to be "Ready" ...
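
pod_ready.go polls the Ready condition of each system-critical pod, producing the recurring "Ready":"False" lines that follow until the pod settles. The same wait expressed with stock kubectl (equivalent in effect, assuming the test's context name):

	kubectl --context auto-20220602191545-12108 -n kube-system wait --for=condition=Ready pod/coredns-64897985d-dsm5l --timeout=5m
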
	I0602 19:45:19.473063    7936 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220602191545-12108: (1.7509742s)
	I0602 19:45:19.473063    7936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54862 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\auto-20220602191545-12108\id_rsa Username:docker}
	I0602 19:45:19.505057    7936 cli_runner.go:217] Completed: docker container inspect auto-20220602191545-12108 --format={{.State.Status}}: (1.7159596s)
	I0602 19:45:19.505057    7936 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0602 19:45:19.505057    7936 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0602 19:45:19.515052    7936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220602191545-12108
	I0602 19:45:20.277466    7936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 19:45:20.448049    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:21.005189    7936 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220602191545-12108: (1.4901307s)
	I0602 19:45:21.005189    7936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54862 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\auto-20220602191545-12108\id_rsa Username:docker}
	I0602 19:45:17.868088   12568 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 19:45:17.868088   12568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0602 19:45:17.878092   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:45:17.883102   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.6929964s)
	I0602 19:45:17.952088   12568 addons.go:153] Setting addon default-storageclass=true in "calico-20220602191616-12108"
	W0602 19:45:17.952088   12568 addons.go:165] addon default-storageclass should already be in state true
	I0602 19:45:17.952088   12568 host.go:66] Checking if "calico-20220602191616-12108" exists ...
	I0602 19:45:17.985092   12568 cli_runner.go:164] Run: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}
	I0602 19:45:18.779249   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.7020666s)
	I0602 19:45:18.782292   12568 node_ready.go:35] waiting up to 5m0s for node "calico-20220602191616-12108" to be "Ready" ...
	I0602 19:45:18.793244   12568 node_ready.go:49] node "calico-20220602191616-12108" has status "Ready":"True"
	I0602 19:45:18.793244   12568 node_ready.go:38] duration metric: took 10.9527ms waiting for node "calico-20220602191616-12108" to be "Ready" ...
	I0602 19:45:18.793244   12568 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 19:45:18.852810   12568 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace to be "Ready" ...
	I0602 19:45:19.538197   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.6600982s)
	I0602 19:45:19.538197   12568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220602191616-12108\id_rsa Username:docker}
	I0602 19:45:19.632410   12568 cli_runner.go:217] Completed: docker container inspect calico-20220602191616-12108 --format={{.State.Status}}: (1.6473107s)
	I0602 19:45:19.632410   12568 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0602 19:45:19.632410   12568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0602 19:45:19.647445   12568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108
	I0602 19:45:20.180208   12568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 19:45:21.046208   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:21.101235   12568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220602191616-12108: (1.4537834s)
	I0602 19:45:21.101235   12568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54867 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220602191616-12108\id_rsa Username:docker}
	I0602 19:45:17.874108   13568 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 19:45:17.883102   13568 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 19:45:18.002086   13568 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0602 19:45:18.002086   13568 docker.go:541] Images already preloaded, skipping extraction
	I0602 19:45:18.012123   13568 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 19:45:18.119372   13568 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0602 19:45:18.119372   13568 cache_images.go:84] Images are preloaded, skipping loading
	I0602 19:45:18.129365   13568 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 19:45:18.385655   13568 cni.go:95] Creating CNI manager for ""
	I0602 19:45:18.385655   13568 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 19:45:18.385655   13568 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0602 19:45:18.385655   13568 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220602193528-12108 NodeName:newest-cni-20220602193528-12108 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 19:45:18.385655   13568 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "newest-cni-20220602193528-12108"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0602 19:45:18.385655   13568 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220602193528-12108 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220602193528-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0602 19:45:18.395641   13568 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0602 19:45:18.430353   13568 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 19:45:18.456378   13568 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 19:45:18.489659   13568 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (415 bytes)
	I0602 19:45:18.529655   13568 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 19:45:18.591880   13568 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2189 bytes)
	I0602 19:45:18.665601   13568 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0602 19:45:18.680608   13568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
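
The two commands above implement the /etc/hosts injection: the grep checks for an existing control-plane record, and the bash pipeline rewrites the file through a temp copy in one pass (filter out any stale record, append the fresh one, sudo cp back). After it runs, the node's /etc/hosts carries (reconstructed from the command, not captured output):

	192.168.58.2	control-plane.minikube.internal

which is what lets kubeconfigs pointed at control-plane.minikube.internal:8443 resolve from inside the container.
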
	I0602 19:45:18.709613   13568 certs.go:54] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\newest-cni-20220602193528-12108 for IP: 192.168.58.2
	I0602 19:45:18.714605   13568 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0602 19:45:18.715629   13568 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0602 19:45:18.716606   13568 certs.go:298] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\newest-cni-20220602193528-12108\client.key
	I0602 19:45:18.716606   13568 certs.go:298] skipping minikube signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\newest-cni-20220602193528-12108\apiserver.key.cee25041
	I0602 19:45:18.716606   13568 certs.go:298] skipping aggregator signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\newest-cni-20220602193528-12108\proxy-client.key
	I0602 19:45:18.718608   13568 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\12108.pem (1338 bytes)
	W0602 19:45:18.719611   13568 certs.go:384] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\12108_empty.pem, impossibly tiny 0 bytes
	I0602 19:45:18.719611   13568 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0602 19:45:18.719611   13568 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0602 19:45:18.719611   13568 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0602 19:45:18.720612   13568 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0602 19:45:18.720612   13568 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\121082.pem (1708 bytes)
	I0602 19:45:18.722606   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\newest-cni-20220602193528-12108\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 19:45:18.801249   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\newest-cni-20220602193528-12108\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0602 19:45:18.873739   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\newest-cni-20220602193528-12108\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 19:45:18.947214   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\newest-cni-20220602193528-12108\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0602 19:45:19.019021   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 19:45:19.101812   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0602 19:45:19.165962   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 19:45:19.501057   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0602 19:45:19.568074   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\121082.pem --> /usr/share/ca-certificates/121082.pem (1708 bytes)
	I0602 19:45:19.622432   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 19:45:19.690417   13568 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\12108.pem --> /usr/share/ca-certificates/12108.pem (1338 bytes)
	I0602 19:45:19.749142   13568 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 19:45:19.808211   13568 ssh_runner.go:195] Run: openssl version
	I0602 19:45:19.830196   13568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121082.pem && ln -fs /usr/share/ca-certificates/121082.pem /etc/ssl/certs/121082.pem"
	I0602 19:45:19.893481   13568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121082.pem
	I0602 19:45:19.903547   13568 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:28 /usr/share/ca-certificates/121082.pem
	I0602 19:45:19.914475   13568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121082.pem
	I0602 19:45:19.935474   13568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/121082.pem /etc/ssl/certs/3ec20f2e.0"
	I0602 19:45:19.992510   13568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 19:45:20.074759   13568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 19:45:20.085762   13568 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:16 /usr/share/ca-certificates/minikubeCA.pem
	I0602 19:45:20.095755   13568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 19:45:20.132752   13568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0602 19:45:20.189200   13568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12108.pem && ln -fs /usr/share/ca-certificates/12108.pem /etc/ssl/certs/12108.pem"
	I0602 19:45:20.228223   13568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12108.pem
	I0602 19:45:20.247519   13568 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:28 /usr/share/ca-certificates/12108.pem
	I0602 19:45:20.271468   13568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12108.pem
	I0602 19:45:20.295462   13568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12108.pem /etc/ssl/certs/51391683.0"
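
Each "openssl x509 -hash -noout" run above prints the certificate's subject hash, and the paired "ln -fs" uses that hash as the symlink name under /etc/ssl/certs (3ec20f2e.0, b5213941.0 and 51391683.0 here); that hashed-name layout is how OpenSSL locates a CA in a certificate directory. A spot check inside the node (reconstructed, not captured output):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                            # symlink to minikubeCA.pem
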
	I0602 19:45:20.320031   13568 kubeadm.go:395] StartCluster: {Name:newest-cni-20220602193528-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:newest-cni-20220602193528-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 19:45:20.330784   13568 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 19:45:20.436429   13568 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 19:45:20.552771   13568 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0602 19:45:20.552771   13568 kubeadm.go:626] restartCluster start
	I0602 19:45:20.573731   13568 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0602 19:45:20.609703   13568 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:20.623595   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:21.287482    7936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 19:45:22.470550    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:22.553895    7936 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.086169s)
	I0602 19:45:22.554945    7936 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
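
For reference, the sed pipeline completed above splices a static hosts block into the CoreDNS Corefile just ahead of the "forward . /etc/resolv.conf" plugin, which is how the host record gets injected. Reconstructed from the command itself (not captured output), the inserted fragment is:

	hosts {
	   192.168.65.2 host.minikube.internal
	   fallthrough
	}

The fallthrough directive hands any name other than host.minikube.internal back to the remaining plugins, so normal cluster DNS is unaffected.
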
	I0602 19:45:23.045096    7936 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.766915s)
	I0602 19:45:23.045096    7936 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.7569034s)
	I0602 19:45:23.049091    7936 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
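
This closes the addon phase for the auto profile: each manifest was rendered in memory, copied over SSH (the "scp memory -->" lines above), then applied with the cluster's pinned kubectl; "enableAddons completed" a few lines below stamps the total duration. A post-hoc check from the host (a spot check, not part of the test flow):

	out/minikube-windows-amd64.exe -p auto-20220602191545-12108 addons list
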
	I0602 19:45:21.777443   12568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 19:45:23.744608   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:24.858043   12568 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.792813s)
	I0602 19:45:24.858043   12568 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0602 19:45:25.545725   12568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.7672601s)
	I0602 19:45:25.545725   12568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.3654933s)
	I0602 19:45:25.549754   12568 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0602 19:45:21.424687   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:23.908577   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:23.052527    7936 addons.go:417] enableAddons completed in 7.0695754s
	I0602 19:45:24.876019    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:25.553701   12568 addons.go:417] enableAddons completed in 9.3895777s
	I0602 19:45:26.047046   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:22.062468   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.4388661s)
	I0602 19:45:22.064441   13568 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220602193528-12108" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0602 19:45:22.066452   13568 kubeconfig.go:127] "newest-cni-20220602193528-12108" context is missing from C:\Users\jenkins.minikube7\minikube-integration\kubeconfig - will repair!
	I0602 19:45:22.068428   13568 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
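
Worth noting: the context repair rewrites C:\Users\jenkins.minikube7\minikube-integration\kubeconfig under the file lock shown above (500ms retry delay, 1m timeout), so the parallel profiles in this run don't clobber each other's entries. A hypothetical spot check from the host afterwards:

	kubectl config get-contexts newest-cni-20220602193528-12108
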
	I0602 19:45:22.092682   13568 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0602 19:45:22.113602   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:22.126562   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:22.172238   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:22.372845   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:22.387277   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:22.471542   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:22.574891   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:22.584878   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:22.609901   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:22.786637   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:22.798246   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:22.966111   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:22.986704   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:22.998742   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:23.025213   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:23.188077   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:23.198577   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:23.234568   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:23.373670   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:23.387106   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:23.411695   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:23.577397   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:23.588194   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:23.616393   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:23.778604   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:23.789303   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:23.814065   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:23.986333   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:23.997920   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:24.028726   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:24.186179   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:24.198529   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:24.232261   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:24.375290   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:24.393445   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:24.426015   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:24.577736   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:24.587592   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:24.619981   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:24.782977   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:24.791981   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:24.820410   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:24.984347   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:24.995639   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:25.026581   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:25.185586   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:25.197172   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:25.223868   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:25.223868   13568 api_server.go:165] Checking apiserver status ...
	I0602 19:45:25.232856   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 19:45:25.289039   13568 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:25.289039   13568 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
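
The repeated "unable to get apiserver pid" warnings above are the process probe failing: pgrep's -f matches against the full command line, -x requires the whole line to match the pattern, and -n returns only the newest match, so exit status 1 simply means no kube-apiserver is running yet. When the probe never succeeds inside the wait window, restartCluster stops waiting and reconfigures from kubeadm.yaml instead. Reproduced by hand on the node (assuming the same shell environment):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'; echo $?    # 1 while absent; prints a PID and 0 once up
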
	I0602 19:45:25.289039   13568 kubeadm.go:1092] stopping kube-system containers ...
	I0602 19:45:25.295999   13568 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 19:45:25.395481   13568 docker.go:442] Stopping containers: [d2c5fc0230ad 8e1f9558c0ca 1d1446ebcd29 448dbf0a9b78 24eaf3189696 2d2f8a82cd4f db4c6833f821 d3d06040213e 657a9269a4b8 00aa836bb1a9 7e0c5a6877c8 e576c766bf2a 40851dc91e39 f8bc1117b670 9d6ff8386a76 f0f4349e2a50 37eb5623cd99 fcbc8319890a d617f2adfb0d d9cb2e830613 001ff57088e9 7a54dbda0d91 bc5560d151d2 4cfadebe4a13]
	I0602 19:45:25.404451   13568 ssh_runner.go:195] Run: docker stop d2c5fc0230ad 8e1f9558c0ca 1d1446ebcd29 448dbf0a9b78 24eaf3189696 2d2f8a82cd4f db4c6833f821 d3d06040213e 657a9269a4b8 00aa836bb1a9 7e0c5a6877c8 e576c766bf2a 40851dc91e39 f8bc1117b670 9d6ff8386a76 f0f4349e2a50 37eb5623cd99 fcbc8319890a d617f2adfb0d d9cb2e830613 001ff57088e9 7a54dbda0d91 bc5560d151d2 4cfadebe4a13
	I0602 19:45:25.517720   13568 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0602 19:45:25.568707   13568 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 19:45:25.592701   13568 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jun  2 19:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jun  2 19:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jun  2 19:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jun  2 19:43 /etc/kubernetes/scheduler.conf
	
	I0602 19:45:25.602703   13568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0602 19:45:25.642424   13568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0602 19:45:25.687517   13568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0602 19:45:25.709176   13568 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:25.725650   13568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0602 19:45:25.774027   13568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0602 19:45:25.795775   13568 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0602 19:45:25.808264   13568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0602 19:45:25.862843   13568 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 19:45:25.886823   13568 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0602 19:45:25.886823   13568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 19:45:26.028041   13568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 19:45:26.408188   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:28.410072   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:30.425070   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:27.397407    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:29.872237    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:28.556289   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:30.957286   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:28.165756   13568 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.1377054s)
	I0602 19:45:28.165756   13568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0602 19:45:28.525225   13568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 19:45:28.840566   13568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0602 19:45:29.088918   13568 api_server.go:51] waiting for apiserver process to appear ...
	I0602 19:45:29.101919   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 19:45:29.687355   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 19:45:30.181868   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 19:45:30.683275   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 19:45:31.183167   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 19:45:32.432855   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:34.988897   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:31.882862    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:34.375631    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:32.958874   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:35.451130   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:31.689475   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 19:45:32.182798   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 19:45:32.675827   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 19:45:33.178329   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 19:45:33.679995   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 19:45:33.846188   13568 api_server.go:71] duration metric: took 4.7572492s to wait for apiserver process to appear ...
	I0602 19:45:33.846188   13568 api_server.go:87] waiting for apiserver healthz status ...
	I0602 19:45:33.846188   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:33.852390   13568 api_server.go:256] stopped: https://127.0.0.1:54947/healthz: Get "https://127.0.0.1:54947/healthz": EOF
	I0602 19:45:34.365644   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:37.411055   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:39.421012   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:36.377758    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:38.386042    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:40.891552    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:37.455873   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:39.952037   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:39.374333   13568 api_server.go:256] stopped: https://127.0.0.1:54947/healthz: Get "https://127.0.0.1:54947/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0602 19:45:39.861043   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:40.613416   13568 api_server.go:266] https://127.0.0.1:54947/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0602 19:45:40.613416   13568 api_server.go:102] status: https://127.0.0.1:54947/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0602 19:45:40.861510   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:41.040568   13568 api_server.go:266] https://127.0.0.1:54947/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 19:45:41.040568   13568 api_server.go:102] status: https://127.0.0.1:54947/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 19:45:41.360373   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:41.384512   13568 api_server.go:266] https://127.0.0.1:54947/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 19:45:41.384512   13568 api_server.go:102] status: https://127.0.0.1:54947/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 19:45:41.907546   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:43.909987   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:42.899904    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:45.527740    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:41.953590   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:44.055320   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:41.861124   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:41.949494   13568 api_server.go:266] https://127.0.0.1:54947/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 19:45:41.949494   13568 api_server.go:102] status: https://127.0.0.1:54947/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 19:45:42.364745   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:42.455024   13568 api_server.go:266] https://127.0.0.1:54947/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 19:45:42.456003   13568 api_server.go:102] status: https://127.0.0.1:54947/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 19:45:42.863056   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:42.960010   13568 api_server.go:266] https://127.0.0.1:54947/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 19:45:42.960010   13568 api_server.go:102] status: https://127.0.0.1:54947/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 19:45:43.354853   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:43.465127   13568 api_server.go:266] https://127.0.0.1:54947/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 19:45:43.465127   13568 api_server.go:102] status: https://127.0.0.1:54947/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 19:45:43.863296   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:44.041753   13568 api_server.go:266] https://127.0.0.1:54947/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 19:45:44.041836   13568 api_server.go:102] status: https://127.0.0.1:54947/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 19:45:44.366599   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:44.849840   13568 api_server.go:266] https://127.0.0.1:54947/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 19:45:44.849840   13568 api_server.go:102] status: https://127.0.0.1:54947/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 19:45:44.853919   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:44.880015   13568 api_server.go:266] https://127.0.0.1:54947/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 19:45:44.880015   13568 api_server.go:102] status: https://127.0.0.1:54947/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 19:45:45.359749   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:45.529710   13568 api_server.go:266] https://127.0.0.1:54947/healthz returned 200:
	ok
	I0602 19:45:45.552708   13568 api_server.go:140] control plane version: v1.23.6
	I0602 19:45:45.552708   13568 api_server.go:130] duration metric: took 11.7064701s to wait for apiserver health ...
	I0602 19:45:45.552708   13568 cni.go:95] Creating CNI manager for ""
	I0602 19:45:45.552708   13568 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 19:45:45.552708   13568 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 19:45:45.755019   13568 system_pods.go:59] 8 kube-system pods found
	I0602 19:45:45.755019   13568 system_pods.go:61] "coredns-64897985d-nvh82" [e020a13f-06c3-4682-8596-3644e6368c0d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0602 19:45:45.755019   13568 system_pods.go:61] "etcd-newest-cni-20220602193528-12108" [9ef3d8bd-c960-4a4c-94cf-c13c0e665943] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0602 19:45:45.755019   13568 system_pods.go:61] "kube-apiserver-newest-cni-20220602193528-12108" [bde9ca58-c780-44d8-95d5-ae32ca2ec9e7] Running
	I0602 19:45:45.755019   13568 system_pods.go:61] "kube-controller-manager-newest-cni-20220602193528-12108" [4a3f5b11-4274-48f1-adba-16d5ee24cef6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0602 19:45:45.755019   13568 system_pods.go:61] "kube-proxy-6qlxd" [83790132-5a2f-4b5b-9e93-dea1fd63879f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0602 19:45:45.755019   13568 system_pods.go:61] "kube-scheduler-newest-cni-20220602193528-12108" [8d248962-47a0-44ab-b62e-d7215d2438b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0602 19:45:45.755019   13568 system_pods.go:61] "metrics-server-b955d9d8-4zjkc" [f7310338-75db-4112-9f21-d33fba8787e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 19:45:45.755019   13568 system_pods.go:61] "storage-provisioner" [05e51f8c-9b94-44b1-867a-06909461c1d3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 19:45:45.755019   13568 system_pods.go:74] duration metric: took 202.3101ms to wait for pod list to return data ...
	I0602 19:45:45.755019   13568 node_conditions.go:102] verifying NodePressure condition ...
	I0602 19:45:45.863551   13568 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0602 19:45:45.863706   13568 node_conditions.go:123] node cpu capacity is 16
	I0602 19:45:45.863706   13568 node_conditions.go:105] duration metric: took 108.6867ms to run NodePressure ...
	I0602 19:45:45.863706   13568 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 19:45:47.658309   13568 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.7944465s)
	I0602 19:45:47.658309   13568 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0602 19:45:47.692004   13568 ops.go:34] apiserver oom_adj: -16
	I0602 19:45:47.692004   13568 kubeadm.go:630] restartCluster took 27.1391171s
	I0602 19:45:47.692081   13568 kubeadm.go:397] StartCluster complete in 27.3719325s
	I0602 19:45:47.692127   13568 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:45:47.692404   13568 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0602 19:45:47.699337   13568 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:45:47.761081   13568 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220602193528-12108" rescaled to 1
	I0602 19:45:47.761081   13568 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 19:45:47.761081   13568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0602 19:45:47.766080   13568 out.go:177] * Verifying Kubernetes components...
	I0602 19:45:47.761081   13568 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0602 19:45:47.762080   13568 config.go:178] Loaded profile config "newest-cni-20220602193528-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:45:47.767084   13568 addons.go:65] Setting dashboard=true in profile "newest-cni-20220602193528-12108"
	I0602 19:45:47.767084   13568 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220602193528-12108"
	I0602 19:45:47.767084   13568 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220602193528-12108"
	I0602 19:45:47.771085   13568 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220602193528-12108"
	I0602 19:45:47.767084   13568 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220602193528-12108"
	I0602 19:45:47.771085   13568 addons.go:153] Setting addon dashboard=true in "newest-cni-20220602193528-12108"
	W0602 19:45:47.771085   13568 addons.go:165] addon dashboard should already be in state true
	I0602 19:45:47.771085   13568 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220602193528-12108"
	W0602 19:45:47.771085   13568 addons.go:165] addon storage-provisioner should already be in state true
	I0602 19:45:47.771085   13568 host.go:66] Checking if "newest-cni-20220602193528-12108" exists ...
	I0602 19:45:47.771085   13568 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220602193528-12108"
	W0602 19:45:47.771085   13568 addons.go:165] addon metrics-server should already be in state true
	I0602 19:45:47.771085   13568 host.go:66] Checking if "newest-cni-20220602193528-12108" exists ...
	I0602 19:45:47.772082   13568 host.go:66] Checking if "newest-cni-20220602193528-12108" exists ...
	I0602 19:45:47.787060   13568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 19:45:47.792060   13568 cli_runner.go:164] Run: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}
	I0602 19:45:47.793109   13568 cli_runner.go:164] Run: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}
	I0602 19:45:47.794075   13568 cli_runner.go:164] Run: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}
	I0602 19:45:47.795068   13568 cli_runner.go:164] Run: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}
	I0602 19:45:48.251664   13568 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0602 19:45:48.270554   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:49.486707   13568 cli_runner.go:217] Completed: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}: (1.6926254s)
	I0602 19:45:49.501754   13568 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0602 19:45:49.510708   13568 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0602 19:45:49.514711   13568 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0602 19:45:49.514711   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0602 19:45:49.517721   13568 cli_runner.go:217] Completed: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}: (1.7246046s)
	I0602 19:45:49.517721   13568 cli_runner.go:217] Completed: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}: (1.7226457s)
	I0602 19:45:49.521743   13568 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 19:45:49.525713   13568 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 19:45:49.525713   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0602 19:45:49.525713   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:49.533710   13568 cli_runner.go:217] Completed: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}: (1.7416425s)
	I0602 19:45:49.533710   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:49.536735   13568 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0602 19:45:46.456242   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:48.966120   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:47.885090    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:49.888764    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:46.460244   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:48.464960   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:50.957962   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:49.546118   13568 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0602 19:45:49.546118   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0602 19:45:49.564755   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:49.567734   13568 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220602193528-12108"
	W0602 19:45:49.567734   13568 addons.go:165] addon default-storageclass should already be in state true
	I0602 19:45:49.567734   13568 host.go:66] Checking if "newest-cni-20220602193528-12108" exists ...
	I0602 19:45:49.601733   13568 cli_runner.go:164] Run: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}
	I0602 19:45:49.990230   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.7196688s)
	I0602 19:45:49.990230   13568 api_server.go:51] waiting for apiserver process to appear ...
	I0602 19:45:50.010206   13568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 19:45:50.073210   13568 api_server.go:71] duration metric: took 2.3121189s to wait for apiserver process to appear ...
	I0602 19:45:50.073210   13568 api_server.go:87] waiting for apiserver healthz status ...
	I0602 19:45:50.073210   13568 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54947/healthz ...
	I0602 19:45:50.094211   13568 api_server.go:266] https://127.0.0.1:54947/healthz returned 200:
	ok
	I0602 19:45:50.099215   13568 api_server.go:140] control plane version: v1.23.6
	I0602 19:45:50.099215   13568 api_server.go:130] duration metric: took 26.0041ms to wait for apiserver health ...
	I0602 19:45:50.099215   13568 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 19:45:50.155229   13568 system_pods.go:59] 8 kube-system pods found
	I0602 19:45:50.155229   13568 system_pods.go:61] "coredns-64897985d-nvh82" [e020a13f-06c3-4682-8596-3644e6368c0d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0602 19:45:50.155229   13568 system_pods.go:61] "etcd-newest-cni-20220602193528-12108" [9ef3d8bd-c960-4a4c-94cf-c13c0e665943] Running
	I0602 19:45:50.155229   13568 system_pods.go:61] "kube-apiserver-newest-cni-20220602193528-12108" [bde9ca58-c780-44d8-95d5-ae32ca2ec9e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0602 19:45:50.155229   13568 system_pods.go:61] "kube-controller-manager-newest-cni-20220602193528-12108" [4a3f5b11-4274-48f1-adba-16d5ee24cef6] Running
	I0602 19:45:50.155229   13568 system_pods.go:61] "kube-proxy-6qlxd" [83790132-5a2f-4b5b-9e93-dea1fd63879f] Running
	I0602 19:45:50.155229   13568 system_pods.go:61] "kube-scheduler-newest-cni-20220602193528-12108" [8d248962-47a0-44ab-b62e-d7215d2438b0] Running
	I0602 19:45:50.155229   13568 system_pods.go:61] "metrics-server-b955d9d8-4zjkc" [f7310338-75db-4112-9f21-d33fba8787e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 19:45:50.155229   13568 system_pods.go:61] "storage-provisioner" [05e51f8c-9b94-44b1-867a-06909461c1d3] Running
	I0602 19:45:50.155229   13568 system_pods.go:74] duration metric: took 56.0143ms to wait for pod list to return data ...
	I0602 19:45:50.155229   13568 default_sa.go:34] waiting for default service account to be created ...
	I0602 19:45:50.166988   13568 default_sa.go:45] found service account: "default"
	I0602 19:45:50.167149   13568 default_sa.go:55] duration metric: took 11.8514ms for default service account to be created ...
	I0602 19:45:50.167149   13568 kubeadm.go:572] duration metric: took 2.406057s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0602 19:45:50.167149   13568 node_conditions.go:102] verifying NodePressure condition ...
	I0602 19:45:50.184766   13568 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0602 19:45:50.184766   13568 node_conditions.go:123] node cpu capacity is 16
	I0602 19:45:50.184766   13568 node_conditions.go:105] duration metric: took 17.617ms to run NodePressure ...
	I0602 19:45:50.184766   13568 start.go:213] waiting for startup goroutines ...
	I0602 19:45:51.290384   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.7566664s)
	I0602 19:45:51.290384   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.7646634s)
	I0602 19:45:51.290384   13568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54943 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220602193528-12108\id_rsa Username:docker}
	I0602 19:45:51.290384   13568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54943 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220602193528-12108\id_rsa Username:docker}
	I0602 19:45:51.339386   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.7746237s)
	I0602 19:45:51.340760   13568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54943 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220602193528-12108\id_rsa Username:docker}
	I0602 19:45:51.368695   13568 cli_runner.go:217] Completed: docker container inspect newest-cni-20220602193528-12108 --format={{.State.Status}}: (1.7669547s)
	I0602 19:45:51.369047   13568 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0602 19:45:51.369087   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0602 19:45:51.386688   13568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108
	I0602 19:45:51.423686   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:53.429991   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:52.381387    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:54.387104    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:53.456004   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:55.961545   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:51.715375   13568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 19:45:51.742885   13568 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0602 19:45:51.742965   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0602 19:45:51.860842   13568 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0602 19:45:51.860842   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0602 19:45:51.864810   13568 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0602 19:45:51.864810   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0602 19:45:51.978825   13568 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0602 19:45:51.978825   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0602 19:45:52.041211   13568 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0602 19:45:52.041211   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0602 19:45:52.080073   13568 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0602 19:45:52.080073   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0602 19:45:52.158643   13568 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0602 19:45:52.158643   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0602 19:45:52.258640   13568 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0602 19:45:52.258640   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0602 19:45:52.371390   13568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0602 19:45:52.461418   13568 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0602 19:45:52.461418   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0602 19:45:52.656675   13568 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0602 19:45:52.656675   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0602 19:45:52.779132   13568 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0602 19:45:52.779132   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0602 19:45:52.892716   13568 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220602193528-12108: (1.5060214s)
	I0602 19:45:52.892716   13568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54943 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220602193528-12108\id_rsa Username:docker}
	I0602 19:45:52.947185   13568 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0602 19:45:52.947333   13568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0602 19:45:53.260646   13568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0602 19:45:53.580387   13568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 19:45:55.767378   13568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.0517429s)
	I0602 19:45:55.880534   13568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.5091289s)
	I0602 19:45:55.880534   13568 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220602193528-12108"
	I0602 19:45:56.658178   13568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.3975176s)
	I0602 19:45:56.659168   13568 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.0777781s)
	I0602 19:45:56.663269   13568 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0602 19:45:56.667539   13568 addons.go:417] enableAddons completed in 8.9064194s
	I0602 19:45:56.893715   13568 start.go:504] kubectl: 1.18.2, cluster: 1.23.6 (minor skew: 5)
	I0602 19:45:56.895730   13568 out.go:177] 
	W0602 19:45:56.898471   13568 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.23.6.
	I0602 19:45:56.902106   13568 out.go:177]   - Want kubectl v1.23.6? Try 'minikube kubectl -- get pods -A'
	I0602 19:45:56.906953   13568 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220602193528-12108" cluster and "default" namespace by default
	I0602 19:45:55.937158   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:58.412172   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:00.446385   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:56.391721    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:58.878490    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:00.891997    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:45:58.542909   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:00.912996   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:02.920944   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:04.925373   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:03.389662    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:05.898031    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:02.956957   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:05.048708   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:07.418700   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:09.917140   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:08.389076    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:10.877594    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:07.462698   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:09.954846   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:11.927311   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:14.415947   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:13.384426    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:15.884637    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:12.462765   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:14.969890   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:16.927070   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:19.412475   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:18.381763    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:20.386682    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:17.463199   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:19.959815   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:21.421850   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:23.423626   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:22.882125    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:24.882194    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:21.959877   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:24.458199   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:25.931595   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:28.423791   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:27.380583    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:29.871026    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:26.935194   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:28.963416   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:30.922650   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:32.923471   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:34.938432   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:31.879002    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:33.885032    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:31.475999   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:33.914876   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:35.965256   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:37.411449   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:39.918209   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:36.384467    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:38.886247    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:40.912312    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:38.461171   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:40.906328   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:42.422969   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:44.913110   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:43.375284    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:45.384063    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:42.956634   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:45.459202   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:46.922296   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:49.417565   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:47.387326    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:49.875477    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:47.904173   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:49.953096   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:51.917138   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:54.431186   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:51.875929    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:53.889171    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:52.455048   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:54.948954   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:56.922439   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:59.414486   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:56.383559    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:58.386074    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:00.878659    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:56.963450   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:46:59.453500   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:01.427193   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:03.919623   11612 pod_ready.go:102] pod "coredns-64897985d-6flrb" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:02.882523    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:04.883156    7936 pod_ready.go:102] pod "coredns-64897985d-dsm5l" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:01.952788   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:03.955309   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	I0602 19:47:05.958585   12568 pod_ready.go:102] pod "calico-kube-controllers-8594699699-m62nx" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-06-02 19:44:54 UTC, end at Thu 2022-06-02 19:47:17 UTC. --
	Jun 02 19:46:49 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:49.728948900Z" level=info msg="ignoring event" container=6ed686ffe398184051d614bba0314eb03519439031c2efbca4a7745a4ed8c2bd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:52 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:52.066772800Z" level=info msg="ignoring event" container=2282192f5a17fad5961885948f0242fa9c61f2aba358d9b81388dd9c5e2a9519 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:52 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:52.272282500Z" level=info msg="ignoring event" container=62a46d6a973fd51419682bc595ccd8868f144b358ef0d98dc3758886613999e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:52 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:52.447107000Z" level=info msg="ignoring event" container=9f73a5993f8774e9016b21e2613d19d966bddac5405d39fa417f38dca7c95883 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:53 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:53.853573400Z" level=info msg="ignoring event" container=bb61c185bc45fced836bb26e1c294788f694041adb9b81b4891063bbd225d1ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:55 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:55.867115700Z" level=info msg="ignoring event" container=0aae13a6117de8e9ac5ffc909b38c8e468704ae7a2bd1a114500c46eb742bbf9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:56 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:56.001854600Z" level=info msg="ignoring event" container=c89a95083f65205f356fcb247eaec0d2c5781704d03a2d804a3429244964d28e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:58 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:58.088481400Z" level=info msg="ignoring event" container=97ad514634973bcc851a7532092548d446424c8c1b751fc9fa23499cd98dc45f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:58 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:58.252614600Z" level=info msg="ignoring event" container=c74890e3aec7331da7f905775ca0fdffd0cae8cd59e29377978e8acb23ebc741 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:46:59 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:46:59.339449100Z" level=info msg="ignoring event" container=f3713257900fbfc741111bb4e05810045d3c83e7be0d4e920c8feaa6937ab930 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:47:00 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:47:00.352703300Z" level=info msg="ignoring event" container=f0d141ca1181fca14590fdcf7495e31b40556474dde7770c14eba977ea3ba12d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:47:00 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:47:00.562019400Z" level=info msg="ignoring event" container=261f39c4dd42734f5251205a42c4d610d7e3e931b195f69488bdf058de2f6746 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:47:02 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:47:02.661622700Z" level=info msg="ignoring event" container=c89a8024bcb4a6900df5b6bb417ab2fb238d43b79c7e09e1c73a09a36b2e4f42 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:47:02 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:47:02.775972000Z" level=info msg="ignoring event" container=e26dbf56c9d2e0e40c4bd63bcf0623f1b30ff6c08b8e5e15d939a2de8faaccca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:47:04 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:47:04.650358900Z" level=info msg="ignoring event" container=6515a1656e5f224fc1c80cd36462b09e2de73eba156801b8ee28b99f310121b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:47:05 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:47:05.040967400Z" level=info msg="ignoring event" container=2f334d39c3531f2cf6eff7f78fcf395deeb79e6927e5931458890ba5d4c1d4c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:47:07 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:47:07.301020000Z" level=info msg="ignoring event" container=0ed681c688a23b9daefcd040d91f59812dce720cd51397297794df1d61436a99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:47:07 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:47:07.490392400Z" level=info msg="ignoring event" container=86154aa53814210d7dd7aee0fe5c92ef43a1fe8b4f236796fde4b84029592d5c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:47:10 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:47:10.251740700Z" level=info msg="ignoring event" container=2c845dbb9b0106b9afb19fee23b76c7a7556d78d68ba403aac08e284112de0fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:47:10 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:47:10.439183600Z" level=info msg="ignoring event" container=0a9762a792cc6f385b29582812cb287deec4aa4a13ee80ff77c54a7f41b60d53 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:47:11 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:47:11.059726800Z" level=info msg="ignoring event" container=38be3d50d85e0fa9092fb1bcc9627f60f22914ff5a38b4b906693fda9e88ccbc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:47:12 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:47:12.134054600Z" level=info msg="ignoring event" container=03f73da552867474c2dbe06c3799c89c9ffa939da19fa3ecd57181aaca9dbf82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:47:14 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:47:14.739057300Z" level=info msg="ignoring event" container=0e2bd2b4fe3613af1902a6f52e79089b469a03bc4a8a3267eaef444bbb958e38 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:47:15 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:47:15.137288300Z" level=info msg="ignoring event" container=6b791474461f094525f07849371bfadc91091d8a335285b9e625f1774beec089 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 19:47:15 newest-cni-20220602193528-12108 dockerd[251]: time="2022-06-02T19:47:15.240213800Z" level=info msg="ignoring event" container=1acf379070592ee501751d922de7b98e53f98ab907093e9efab131dfed6107c1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	f3713257900fb       6e38f40d628db       51 seconds ago       Exited              storage-provisioner       2                   23b54706dc012
	1705a550b9b6c       4c03754524064       About a minute ago   Running             kube-proxy                1                   731b52d4bbe49
	93d0a0699aa39       25f8c7f3da61c       About a minute ago   Running             etcd                      1                   9bfbfb42dc1e8
	2e5f7364c8bed       595f327f224a4       About a minute ago   Running             kube-scheduler            1                   9fa60b2829a78
	2bdb3f26ce03b       8fa62c12256df       About a minute ago   Running             kube-apiserver            1                   1d4b824c3839a
	a13b2fac238d7       df7b72818ad2e       About a minute ago   Running             kube-controller-manager   2                   ce12276d46737
	e576c766bf2ab       4c03754524064       3 minutes ago        Exited              kube-proxy                0                   9d6ff8386a767
	f0f4349e2a508       df7b72818ad2e       3 minutes ago        Exited              kube-controller-manager   1                   bc5560d151d25
	37eb5623cd991       25f8c7f3da61c       4 minutes ago        Exited              etcd                      0                   4cfadebe4a138
	fcbc8319890a5       8fa62c12256df       4 minutes ago        Exited              kube-apiserver            0                   001ff57088e9c
	d9cb2e8306132       595f327f224a4       4 minutes ago        Exited              kube-scheduler            0                   7a54dbda0d91f
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20220602193528-12108
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20220602193528-12108
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae
	                    minikube.k8s.io/name=newest-cni-20220602193528-12108
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_02T19_43_43_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Jun 2022 19:43:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20220602193528-12108
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Jun 2022 19:47:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Jun 2022 19:45:41 +0000   Thu, 02 Jun 2022 19:43:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Jun 2022 19:45:41 +0000   Thu, 02 Jun 2022 19:43:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Jun 2022 19:45:41 +0000   Thu, 02 Jun 2022 19:43:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Jun 2022 19:45:41 +0000   Thu, 02 Jun 2022 19:43:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    newest-cni-20220602193528-12108
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	System Info:
	  Machine ID:                 a34bb2508bce429bb90502b0ef044420
	  System UUID:                a34bb2508bce429bb90502b0ef044420
	  Boot ID:                    174c87a1-4ba0-4f3f-a840-04757270163f
	  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-nvh82                                    100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m18s
	  kube-system                 etcd-newest-cni-20220602193528-12108                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         3m57s
	  kube-system                 kube-apiserver-newest-cni-20220602193528-12108             250m (1%)     0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 kube-controller-manager-newest-cni-20220602193528-12108    200m (1%)     0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-proxy-6qlxd                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 kube-scheduler-newest-cni-20220602193528-12108             100m (0%)     0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 metrics-server-b955d9d8-4zjkc                              100m (0%)     0 (0%)      200Mi (0%)       0 (0%)         3m4s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-9zf2w                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kubernetes-dashboard        kubernetes-dashboard-cd7c84bfc-xsbcz                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (5%)   0 (0%)
	  memory             370Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 92s                    kube-proxy  
	  Normal  Starting                 3m13s                  kube-proxy  
	  Normal  NodeHasNoDiskPressure    4m14s (x8 over 4m15s)  kubelet     Node newest-cni-20220602193528-12108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s (x7 over 4m15s)  kubelet     Node newest-cni-20220602193528-12108 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m14s (x8 over 4m15s)  kubelet     Node newest-cni-20220602193528-12108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m34s                  kubelet     Node newest-cni-20220602193528-12108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m34s                  kubelet     Node newest-cni-20220602193528-12108 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  3m34s                  kubelet     Node newest-cni-20220602193528-12108 status is now: NodeHasSufficientMemory
	  Normal  Starting                 3m34s                  kubelet     Starting kubelet.
	  Normal  NodeNotReady             3m33s                  kubelet     Node newest-cni-20220602193528-12108 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3m32s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m23s                  kubelet     Node newest-cni-20220602193528-12108 status is now: NodeReady
	  Normal  Starting                 109s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  108s (x8 over 109s)    kubelet     Node newest-cni-20220602193528-12108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s (x8 over 109s)    kubelet     Node newest-cni-20220602193528-12108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s (x7 over 109s)    kubelet     Node newest-cni-20220602193528-12108 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  108s                   kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [Jun 2 19:22] WSL2: Performing memory compaction.
	[Jun 2 19:23] WSL2: Performing memory compaction.
	[Jun 2 19:25] WSL2: Performing memory compaction.
	[Jun 2 19:35] WSL2: Performing memory compaction.
	[Jun 2 19:36] WSL2: Performing memory compaction.
	[Jun 2 19:37] WSL2: Performing memory compaction.
	[Jun 2 19:38] WSL2: Performing memory compaction.
	[Jun 2 19:41] WSL2: Performing memory compaction.
	[Jun 2 19:42] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [37eb5623cd99] <==
	* {"level":"info","ts":"2022-06-02T19:44:00.445Z","caller":"traceutil/trace.go:171","msg":"trace[1358101858] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"101.6597ms","start":"2022-06-02T19:44:00.344Z","end":"2022-06-02T19:44:00.445Z","steps":["trace[1358101858] 'process raft request'  (duration: 101.2861ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-02T19:44:00.446Z","caller":"traceutil/trace.go:171","msg":"trace[2062613631] transaction","detail":"{read_only:false; response_revision:431; number_of_response:1; }","duration":"103.3103ms","start":"2022-06-02T19:44:00.343Z","end":"2022-06-02T19:44:00.446Z","steps":["trace[2062613631] 'process raft request'  (duration: 101.6054ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-02T19:44:00.604Z","caller":"traceutil/trace.go:171","msg":"trace[1432950124] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"143.1198ms","start":"2022-06-02T19:44:00.461Z","end":"2022-06-02T19:44:00.604Z","steps":["trace[1432950124] 'process raft request'  (duration: 130.8677ms)","trace[1432950124] 'compare'  (duration: 11.6224ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-02T19:44:00.839Z","caller":"traceutil/trace.go:171","msg":"trace[956500219] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"101.6569ms","start":"2022-06-02T19:44:00.737Z","end":"2022-06-02T19:44:00.839Z","steps":["trace[956500219] 'compare'  (duration: 99.5461ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T19:44:00.968Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"110.6166ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-6qlxd\" ","response":"range_response_count:1 size:4448"}
	{"level":"info","ts":"2022-06-02T19:44:00.968Z","caller":"traceutil/trace.go:171","msg":"trace[1357411940] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-6qlxd; range_end:; response_count:1; response_revision:454; }","duration":"110.9896ms","start":"2022-06-02T19:44:00.857Z","end":"2022-06-02T19:44:00.968Z","steps":["trace[1357411940] 'agreement among raft nodes before linearized reading'  (duration: 89.0991ms)","trace[1357411940] 'range keys from in-memory index tree'  (duration: 21.4928ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-02T19:44:00.968Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"108.1911ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-64897985d-2sl4x\" ","response":"range_response_count:1 size:4337"}
	{"level":"info","ts":"2022-06-02T19:44:00.968Z","caller":"traceutil/trace.go:171","msg":"trace[1004335176] range","detail":"{range_begin:/registry/pods/kube-system/coredns-64897985d-2sl4x; range_end:; response_count:1; response_revision:454; }","duration":"108.9072ms","start":"2022-06-02T19:44:00.860Z","end":"2022-06-02T19:44:00.968Z","steps":["trace[1004335176] 'agreement among raft nodes before linearized reading'  (duration: 86.6981ms)","trace[1004335176] 'range keys from in-memory index tree'  (duration: 21.5285ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-02T19:44:00.968Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"107.8543ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-64897985d-nvh82\" ","response":"range_response_count:1 size:3461"}
	{"level":"info","ts":"2022-06-02T19:44:00.969Z","caller":"traceutil/trace.go:171","msg":"trace[331850363] range","detail":"{range_begin:/registry/pods/kube-system/coredns-64897985d-nvh82; range_end:; response_count:1; response_revision:454; }","duration":"108.4961ms","start":"2022-06-02T19:44:00.860Z","end":"2022-06-02T19:44:00.968Z","steps":["trace[331850363] 'agreement among raft nodes before linearized reading'  (duration: 86.1808ms)","trace[331850363] 'range keys from in-memory index tree'  (duration: 21.6444ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-02T19:44:08.052Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"103.7827ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-system/system:persistent-volume-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-02T19:44:08.052Z","caller":"traceutil/trace.go:171","msg":"trace[994262197] range","detail":"{range_begin:/registry/roles/kube-system/system:persistent-volume-provisioner; range_end:; response_count:0; response_revision:476; }","duration":"104.0265ms","start":"2022-06-02T19:44:07.948Z","end":"2022-06-02T19:44:08.052Z","steps":["trace[994262197] 'agreement among raft nodes before linearized reading'  (duration: 87.5831ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T19:44:08.052Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.8469ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:4518"}
	{"level":"info","ts":"2022-06-02T19:44:08.052Z","caller":"traceutil/trace.go:171","msg":"trace[614362432] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:476; }","duration":"103.3016ms","start":"2022-06-02T19:44:07.949Z","end":"2022-06-02T19:44:08.052Z","steps":["trace[614362432] 'agreement among raft nodes before linearized reading'  (duration: 86.5535ms)","trace[614362432] 'range keys from in-memory index tree'  (duration: 16.2372ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-02T19:44:14.581Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"118.6865ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:1 size:4883"}
	{"level":"info","ts":"2022-06-02T19:44:14.581Z","caller":"traceutil/trace.go:171","msg":"trace[331367817] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:1; response_revision:515; }","duration":"119.0079ms","start":"2022-06-02T19:44:14.462Z","end":"2022-06-02T19:44:14.581Z","steps":["trace[331367817] 'agreement among raft nodes before linearized reading'  (duration: 108.5714ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-02T19:44:22.347Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-06-02T19:44:22.348Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"newest-cni-20220602193528-12108","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	WARNING: 2022/06/02 19:44:22 [core] grpc: addrConn.createTransport failed to connect to {192.168.58.2:2379 192.168.58.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.58.2:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/02 19:44:22 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/02 19:44:22 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2022-06-02T19:44:22.535Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b2c6679ac05f2cf1","current-leader-member-id":"b2c6679ac05f2cf1"}
	{"level":"info","ts":"2022-06-02T19:44:22.642Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-02T19:44:22.644Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-06-02T19:44:22.644Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"newest-cni-20220602193528-12108","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	
	* 
	* ==> etcd [93d0a0699aa3] <==
	* {"level":"warn","ts":"2022-06-02T19:45:45.526Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"144.3113ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-02T19:45:45.527Z","caller":"traceutil/trace.go:171","msg":"trace[2000617987] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:579; }","duration":"144.404ms","start":"2022-06-02T19:45:45.382Z","end":"2022-06-02T19:45:45.527Z","steps":["trace[2000617987] 'agreement among raft nodes before linearized reading'  (duration: 142.9119ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T19:45:45.660Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"117.7292ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.58.2\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-06-02T19:45:45.660Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.3676ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:8 size:41515"}
	{"level":"info","ts":"2022-06-02T19:45:45.660Z","caller":"traceutil/trace.go:171","msg":"trace[147537117] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:8; response_revision:580; }","duration":"102.5288ms","start":"2022-06-02T19:45:45.558Z","end":"2022-06-02T19:45:45.660Z","steps":["trace[147537117] 'agreement among raft nodes before linearized reading'  (duration: 78.9483ms)","trace[147537117] 'range keys from in-memory index tree'  (duration: 22.7079ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-02T19:45:45.660Z","caller":"traceutil/trace.go:171","msg":"trace[622534470] range","detail":"{range_begin:/registry/masterleases/192.168.58.2; range_end:; response_count:0; response_revision:580; }","duration":"118.427ms","start":"2022-06-02T19:45:45.542Z","end":"2022-06-02T19:45:45.660Z","steps":["trace[622534470] 'agreement among raft nodes before linearized reading'  (duration: 94.9777ms)","trace[622534470] 'range keys from in-memory index tree'  (duration: 22.7024ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-02T19:45:56.355Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"105.5863ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kubernetes-dashboard/kubernetes-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-02T19:45:56.355Z","caller":"traceutil/trace.go:171","msg":"trace[120776893] range","detail":"{range_begin:/registry/services/specs/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:0; response_revision:642; }","duration":"105.784ms","start":"2022-06-02T19:45:56.250Z","end":"2022-06-02T19:45:56.355Z","steps":["trace[120776893] 'range keys from in-memory index tree'  (duration: 102.9097ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-02T19:45:59.337Z","caller":"traceutil/trace.go:171","msg":"trace[249791148] transaction","detail":"{read_only:false; response_revision:657; number_of_response:1; }","duration":"100.1132ms","start":"2022-06-02T19:45:59.237Z","end":"2022-06-02T19:45:59.337Z","steps":["trace[249791148] 'process raft request'  (duration: 99.7216ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-02T19:45:59.338Z","caller":"traceutil/trace.go:171","msg":"trace[1734302757] linearizableReadLoop","detail":"{readStateIndex:695; appliedIndex:694; }","duration":"100.4407ms","start":"2022-06-02T19:45:59.238Z","end":"2022-06-02T19:45:59.338Z","steps":["trace[1734302757] 'read index received'  (duration: 98.8763ms)","trace[1734302757] 'applied index is now lower than readState.Index'  (duration: 1.5611ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-02T19:45:59.339Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"101.5752ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:260"}
	{"level":"info","ts":"2022-06-02T19:45:59.339Z","caller":"traceutil/trace.go:171","msg":"trace[2788297] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:663; }","duration":"101.9595ms","start":"2022-06-02T19:45:59.237Z","end":"2022-06-02T19:45:59.339Z","steps":["trace[2788297] 'agreement among raft nodes before linearized reading'  (duration: 101.4912ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T19:45:59.339Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"179.8989ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:269"}
	{"level":"info","ts":"2022-06-02T19:45:59.339Z","caller":"traceutil/trace.go:171","msg":"trace[1587263312] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:663; }","duration":"179.9674ms","start":"2022-06-02T19:45:59.159Z","end":"2022-06-02T19:45:59.339Z","steps":["trace[1587263312] 'agreement among raft nodes before linearized reading'  (duration: 179.8528ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T19:45:59.362Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.8839ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3982"}
	{"level":"info","ts":"2022-06-02T19:45:59.363Z","caller":"traceutil/trace.go:171","msg":"trace[2101052752] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:663; }","duration":"107.1489ms","start":"2022-06-02T19:45:59.256Z","end":"2022-06-02T19:45:59.363Z","steps":["trace[2101052752] 'agreement among raft nodes before linearized reading'  (duration: 106.797ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T19:45:59.363Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"115.7089ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kubernetes-dashboard/default\" ","response":"range_response_count:1 size:199"}
	{"level":"info","ts":"2022-06-02T19:45:59.363Z","caller":"traceutil/trace.go:171","msg":"trace[183619020] range","detail":"{range_begin:/registry/serviceaccounts/kubernetes-dashboard/default; range_end:; response_count:1; response_revision:663; }","duration":"116.0293ms","start":"2022-06-02T19:45:59.247Z","end":"2022-06-02T19:45:59.363Z","steps":["trace[183619020] 'agreement among raft nodes before linearized reading'  (duration: 115.5069ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T19:45:59.648Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"104.3687ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" ","response":"range_response_count:1 size:4887"}
	{"level":"info","ts":"2022-06-02T19:45:59.649Z","caller":"traceutil/trace.go:171","msg":"trace[1128804163] range","detail":"{range_begin:/registry/deployments/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:674; }","duration":"104.653ms","start":"2022-06-02T19:45:59.544Z","end":"2022-06-02T19:45:59.649Z","steps":["trace[1128804163] 'agreement among raft nodes before linearized reading'  (duration: 93.1928ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T19:45:59.648Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"103.3026ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3982"}
	{"level":"info","ts":"2022-06-02T19:45:59.649Z","caller":"traceutil/trace.go:171","msg":"trace[196657013] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:674; }","duration":"103.7158ms","start":"2022-06-02T19:45:59.545Z","end":"2022-06-02T19:45:59.649Z","steps":["trace[196657013] 'agreement among raft nodes before linearized reading'  (duration: 92.018ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-02T19:45:59.776Z","caller":"traceutil/trace.go:171","msg":"trace[706700929] transaction","detail":"{read_only:false; response_revision:683; number_of_response:1; }","duration":"109.4469ms","start":"2022-06-02T19:45:59.666Z","end":"2022-06-02T19:45:59.776Z","steps":["trace[706700929] 'process raft request'  (duration: 102.1447ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T19:46:42.645Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"101.8677ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2022-06-02T19:46:42.645Z","caller":"traceutil/trace.go:171","msg":"trace[97225568] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:0; response_revision:781; }","duration":"102.1615ms","start":"2022-06-02T19:46:42.543Z","end":"2022-06-02T19:46:42.645Z","steps":["trace[97225568] 'count revisions from in-memory index tree'  (duration: 101.6883ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  19:47:19 up  2:37,  0 users,  load average: 10.63, 6.99, 5.59
	Linux newest-cni-20220602193528-12108 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [2bdb3f26ce03] <==
	* I0602 19:45:41.637652       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0602 19:45:41.648768       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	W0602 19:45:42.058100       1 handler_proxy.go:104] no RequestInfo found in the context
	E0602 19:45:42.058250       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0602 19:45:42.058286       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0602 19:45:44.848949       1 trace.go:205] Trace[456318723]: "Get" url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller,user-agent:kube-apiserver/v1.23.6 (linux/amd64) kubernetes/ad33385,audit-id:0e1ba5c1-5210-4204-b433-79a743d24410,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (02-Jun-2022 19:45:44.345) (total time: 503ms):
	Trace[456318723]: ---"About to write a response" 503ms (19:45:44.848)
	Trace[456318723]: [503.5121ms] [503.5121ms] END
	I0602 19:45:46.748430       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0602 19:45:46.768848       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0602 19:45:46.847843       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0602 19:45:47.253913       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0602 19:45:47.476389       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0602 19:45:47.559530       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0602 19:45:55.249411       1 controller.go:611] quota admission added evaluator for: namespaces
	I0602 19:45:56.448379       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.111.246.143]
	I0602 19:45:56.645509       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.106.248.67]
	I0602 19:45:59.143225       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0602 19:45:59.255000       1 controller.go:611] quota admission added evaluator for: endpoints
	I0602 19:45:59.348013       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	W0602 19:46:42.058486       1 handler_proxy.go:104] no RequestInfo found in the context
	E0602 19:46:42.058605       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0602 19:46:42.058625       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-apiserver [fcbc8319890a] <==
	* W0602 19:44:23.441474       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441485       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441230       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441598       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441633       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441645       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441668       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441266       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441713       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441693       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441732       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441747       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441762       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441822       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441825       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441788       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441790       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441885       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441793       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.442043       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.442059       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.442105       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.442208       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.442300       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0602 19:44:23.441924       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-controller-manager [a13b2fac238d] <==
	* I0602 19:45:59.040292       1 shared_informer.go:247] Caches are synced for namespace 
	I0602 19:45:59.041684       1 range_allocator.go:173] Starting range CIDR allocator
	I0602 19:45:59.041739       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0602 19:45:59.041761       1 shared_informer.go:247] Caches are synced for cidrallocator 
	W0602 19:45:59.040448       1 node_lifecycle_controller.go:1012] Missing timestamp for Node newest-cni-20220602193528-12108. Assuming now as a timestamp.
	I0602 19:45:59.042065       1 node_lifecycle_controller.go:1213] Controller detected that zone  is now in state Normal.
	I0602 19:45:59.040611       1 event.go:294] "Event occurred" object="newest-cni-20220602193528-12108" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220602193528-12108 event: Registered Node newest-cni-20220602193528-12108 in Controller"
	E0602 19:45:59.047599       1 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0602 19:45:59.048609       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 19:45:59.049226       1 shared_informer.go:247] Caches are synced for resource quota 
	E0602 19:45:59.051632       1 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0602 19:45:59.136817       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0602 19:45:59.137256       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0602 19:45:59.144608       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0602 19:45:59.156826       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0602 19:45:59.242636       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-cd7c84bfc to 1"
	I0602 19:45:59.451140       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 19:45:59.451288       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0602 19:45:59.546406       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 19:45:59.553597       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-cd7c84bfc-xsbcz"
	I0602 19:45:59.553925       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-9zf2w"
	E0602 19:46:29.158775       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0602 19:46:29.643198       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0602 19:46:59.354526       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0602 19:46:59.754132       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-controller-manager [f0f4349e2a50] <==
	* I0602 19:43:59.436834       1 shared_informer.go:247] Caches are synced for node 
	I0602 19:43:59.436974       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0602 19:43:59.436910       1 shared_informer.go:247] Caches are synced for GC 
	I0602 19:43:59.442785       1 range_allocator.go:173] Starting range CIDR allocator
	I0602 19:43:59.443028       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0602 19:43:59.443048       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0602 19:43:59.443124       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 19:43:59.444243       1 event.go:294] "Event occurred" object="newest-cni-20220602193528-12108" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220602193528-12108 event: Registered Node newest-cni-20220602193528-12108 in Controller"
	I0602 19:43:59.448657       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0602 19:43:59.451108       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0602 19:43:59.451208       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0602 19:43:59.468244       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0602 19:43:59.536183       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0602 19:43:59.740052       1 range_allocator.go:374] Set node newest-cni-20220602193528-12108 PodCIDR to [192.168.0.0/24]
	I0602 19:43:59.740122       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0602 19:43:59.935664       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 19:43:59.935742       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0602 19:43:59.936505       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 19:44:00.454032       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-2sl4x"
	I0602 19:44:00.454577       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6qlxd"
	I0602 19:44:00.608214       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-nvh82"
	I0602 19:44:00.859355       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0602 19:44:00.981277       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-2sl4x"
	I0602 19:44:14.358917       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0602 19:44:14.456327       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-4zjkc"
	
	* 
	* ==> kube-proxy [1705a550b9b6] <==
	* E0602 19:45:46.346189       1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I0602 19:45:46.352530       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0602 19:45:46.358229       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0602 19:45:46.362670       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0602 19:45:46.370433       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0602 19:45:46.377719       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0602 19:45:46.467336       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0602 19:45:46.467548       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0602 19:45:46.467621       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 19:45:46.757244       1 server_others.go:206] "Using iptables Proxier"
	I0602 19:45:46.757709       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 19:45:46.757744       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 19:45:46.757850       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 19:45:46.759429       1 server.go:656] "Version info" version="v1.23.6"
	I0602 19:45:46.763092       1 config.go:226] "Starting endpoint slice config controller"
	I0602 19:45:46.763119       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 19:45:46.763205       1 config.go:317] "Starting service config controller"
	I0602 19:45:46.763219       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 19:45:46.863501       1 shared_informer.go:247] Caches are synced for service config 
	I0602 19:45:46.863660       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-proxy [e576c766bf2a] <==
	* E0602 19:44:05.054582       1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I0602 19:44:05.140592       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0602 19:44:05.148491       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0602 19:44:05.153924       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0602 19:44:05.239074       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0602 19:44:05.243140       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0602 19:44:05.449564       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0602 19:44:05.449818       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0602 19:44:05.449868       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 19:44:05.746367       1 server_others.go:206] "Using iptables Proxier"
	I0602 19:44:05.746488       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 19:44:05.746502       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 19:44:05.746533       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 19:44:05.747677       1 server.go:656] "Version info" version="v1.23.6"
	I0602 19:44:05.749240       1 config.go:317] "Starting service config controller"
	I0602 19:44:05.749361       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 19:44:05.749278       1 config.go:226] "Starting endpoint slice config controller"
	I0602 19:44:05.749405       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 19:44:05.849742       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0602 19:44:05.849914       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [2e5f7364c8be] <==
	* W0602 19:45:33.944916       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0602 19:45:35.070930       1 serving.go:348] Generated self-signed cert in-memory
	W0602 19:45:40.647732       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0602 19:45:40.647795       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0602 19:45:40.647817       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0602 19:45:40.647829       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0602 19:45:40.936389       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
	I0602 19:45:40.947866       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0602 19:45:40.951345       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0602 19:45:40.949102       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0602 19:45:40.949130       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0602 19:45:41.051469       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [d9cb2e830613] <==
	* E0602 19:43:15.238196       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0602 19:43:15.241464       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0602 19:43:15.241624       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0602 19:43:15.369379       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0602 19:43:15.369519       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0602 19:43:15.373444       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0602 19:43:15.373601       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0602 19:43:15.437712       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0602 19:43:15.437757       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0602 19:43:15.437795       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0602 19:43:15.437827       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0602 19:43:15.452182       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0602 19:43:15.452374       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0602 19:43:15.538996       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0602 19:43:15.539199       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0602 19:43:15.540511       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0602 19:43:15.540645       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0602 19:43:15.572678       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0602 19:43:15.572794       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0602 19:43:17.586808       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0602 19:43:17.586974       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0602 19:43:21.848972       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0602 19:44:22.237837       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0602 19:44:22.238714       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	I0602 19:44:22.240124       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-06-02 19:44:54 UTC, end at Thu 2022-06-02 19:47:21 UTC. --
	Jun 02 19:47:18 newest-cni-20220602193528-12108 kubelet[945]: I0602 19:47:18.760168     945 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="1acf379070592ee501751d922de7b98e53f98ab907093e9efab131dfed6107c1"
	Jun 02 19:47:18 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:47:18.840839     945 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"57a43e7d34345debb96a7e1910af542938561f9c6cb5a52519f700db943d7ffd\" network for pod \"kubernetes-dashboard-cd7c84bfc-xsbcz\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-cd7c84bfc-xsbcz_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"57a43e7d34345debb96a7e1910af542938561f9c6cb5a52519f700db943d7ffd\" network for pod \"kubernetes-dashboard-cd7c84bfc-xsbcz\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-cd7c84bfc-xsbcz_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.84 -j CNI-4ea7c0c5c78ab8a853699f97 -m comment --comment name: \"crio\" id: \"57a43e7d34345debb96a7e1910af542938561f9c6cb5a52519f700db943d7ffd\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-4ea7c0c5c78ab8a853699f97':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 02 19:47:18 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:47:18.840971     945 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"57a43e7d34345debb96a7e1910af542938561f9c6cb5a52519f700db943d7ffd\" network for pod \"kubernetes-dashboard-cd7c84bfc-xsbcz\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-cd7c84bfc-xsbcz_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"57a43e7d34345debb96a7e1910af542938561f9c6cb5a52519f700db943d7ffd\" network for pod \"kubernetes-dashboard-cd7c84bfc-xsbcz\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-cd7c84bfc-xsbcz_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.84 -j CNI-4ea7c0c5c78ab8a853699f97 -m comment --comment name: \"crio\" id: \"57a43e7d34345debb96a7e1910af542938561f9c6cb5a52519f700db943d7ffd\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-4ea7c0c5c78ab8a853699f97':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-xsbcz"
	Jun 02 19:47:18 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:47:18.841038     945 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"57a43e7d34345debb96a7e1910af542938561f9c6cb5a52519f700db943d7ffd\" network for pod \"kubernetes-dashboard-cd7c84bfc-xsbcz\": networkPlugin cni failed to set up pod \"kubernetes-dashboard-cd7c84bfc-xsbcz_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"57a43e7d34345debb96a7e1910af542938561f9c6cb5a52519f700db943d7ffd\" network for pod \"kubernetes-dashboard-cd7c84bfc-xsbcz\": networkPlugin cni failed to teardown pod \"kubernetes-dashboard-cd7c84bfc-xsbcz_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.84 -j CNI-4ea7c0c5c78ab8a853699f97 -m comment --comment name: \"crio\" id: \"57a43e7d34345debb96a7e1910af542938561f9c6cb5a52519f700db943d7ffd\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-4ea7c0c5c78ab8a853699f97':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-xsbcz"
	Jun 02 19:47:18 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:47:18.841302     945 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kubernetes-dashboard-cd7c84bfc-xsbcz_kubernetes-dashboard(6e5f4f81-7f1a-4dbe-acda-91cfedb0abcf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kubernetes-dashboard-cd7c84bfc-xsbcz_kubernetes-dashboard(6e5f4f81-7f1a-4dbe-acda-91cfedb0abcf)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"57a43e7d34345debb96a7e1910af542938561f9c6cb5a52519f700db943d7ffd\\\" network for pod \\\"kubernetes-dashboard-cd7c84bfc-xsbcz\\\": networkPlugin cni failed to set up pod \\\"kubernetes-dashboard-cd7c84bfc-xsbcz_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"57a43e7d34345debb96a7e1910af542938561f9c6cb5a52519f700db943d7ffd\\\" network for pod \\\"kubernetes-dashboard-cd7c84bfc-xsbcz\\\": networkPlugin cni failed to teardown pod \\\"kubernetes-dashboard-cd7c84bfc-xsbcz_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.84 -j CNI-4ea7c0c5c78ab8a853699f97 -m comment --comment name: \\\"crio\\\" id: \\\"57a43e7d34345debb96a7e1910af542938561f9c6cb5a52519f700db943d7ffd\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-4ea7c0c5c78ab8a853699f97':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-xsbcz" podUID=6e5f4f81-7f1a-4dbe-acda-91cfedb0abcf
	Jun 02 19:47:18 newest-cni-20220602193528-12108 kubelet[945]: I0602 19:47:18.843393     945 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="networkPlugin cni failed on the status hook for pod \"kubernetes-dashboard-cd7c84bfc-xsbcz_kubernetes-dashboard\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"57a43e7d34345debb96a7e1910af542938561f9c6cb5a52519f700db943d7ffd\""
	Jun 02 19:47:18 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:47:18.969062     945 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.85 -j CNI-afe6c3e03c38029a1c68f634 -m comment --comment name: \"crio\" id: \"c1c7f8f33469c4b8007d0acc9d20e0977faaebf4b93d16317bc40aa26e603c91\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-afe6c3e03c38029a1c68f634':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kube-system/metrics-server-b955d9d8-4zjkc" podSandboxID={Type:docker ID:c1c7f8f33469c4b8007d0acc9d20e0977faaebf4b93d16317bc40aa26e603c91} podNetnsPath="/proc/14459/ns/net" networkType="bridge" networkName="crio"
	Jun 02 19:47:19 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:47:19.669275     945 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"c1c7f8f33469c4b8007d0acc9d20e0977faaebf4b93d16317bc40aa26e603c91\" network for pod \"metrics-server-b955d9d8-4zjkc\": networkPlugin cni failed to set up pod \"metrics-server-b955d9d8-4zjkc_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"c1c7f8f33469c4b8007d0acc9d20e0977faaebf4b93d16317bc40aa26e603c91\" network for pod \"metrics-server-b955d9d8-4zjkc\": networkPlugin cni failed to teardown pod \"metrics-server-b955d9d8-4zjkc_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.85 -j CNI-afe6c3e03c38029a1c68f634 -m comment --comment name: \"crio\" id: \"c1c7f8f33469c4b8007d0acc9d20e0977faaebf4b93d16317bc40aa26e603c91\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-afe6c3e03c38029a1c68f634':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 02 19:47:19 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:47:19.669590     945 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"c1c7f8f33469c4b8007d0acc9d20e0977faaebf4b93d16317bc40aa26e603c91\" network for pod \"metrics-server-b955d9d8-4zjkc\": networkPlugin cni failed to set up pod \"metrics-server-b955d9d8-4zjkc_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"c1c7f8f33469c4b8007d0acc9d20e0977faaebf4b93d16317bc40aa26e603c91\" network for pod \"metrics-server-b955d9d8-4zjkc\": networkPlugin cni failed to teardown pod \"metrics-server-b955d9d8-4zjkc_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.85 -j CNI-afe6c3e03c38029a1c68f634 -m comment --comment name: \"crio\" id: \"c1c7f8f33469c4b8007d0acc9d20e0977faaebf4b93d16317bc40aa26e603c91\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-afe6c3e03c38029a1c68f634':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/metrics-server-b955d9d8-4zjkc"
	Jun 02 19:47:19 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:47:19.669669     945 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"c1c7f8f33469c4b8007d0acc9d20e0977faaebf4b93d16317bc40aa26e603c91\" network for pod \"metrics-server-b955d9d8-4zjkc\": networkPlugin cni failed to set up pod \"metrics-server-b955d9d8-4zjkc_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"c1c7f8f33469c4b8007d0acc9d20e0977faaebf4b93d16317bc40aa26e603c91\" network for pod \"metrics-server-b955d9d8-4zjkc\": networkPlugin cni failed to teardown pod \"metrics-server-b955d9d8-4zjkc_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.85 -j CNI-afe6c3e03c38029a1c68f634 -m comment --comment name: \"crio\" id: \"c1c7f8f33469c4b8007d0acc9d20e0977faaebf4b93d16317bc40aa26e603c91\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-afe6c3e03c38029a1c68f634':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/metrics-server-b955d9d8-4zjkc"
	Jun 02 19:47:19 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:47:19.669830     945 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"metrics-server-b955d9d8-4zjkc_kube-system(f7310338-75db-4112-9f21-d33fba8787e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"metrics-server-b955d9d8-4zjkc_kube-system(f7310338-75db-4112-9f21-d33fba8787e7)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"c1c7f8f33469c4b8007d0acc9d20e0977faaebf4b93d16317bc40aa26e603c91\\\" network for pod \\\"metrics-server-b955d9d8-4zjkc\\\": networkPlugin cni failed to set up pod \\\"metrics-server-b955d9d8-4zjkc_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"c1c7f8f33469c4b8007d0acc9d20e0977faaebf4b93d16317bc40aa26e603c91\\\" network for pod \\\"metrics-server-b955d9d8-4zjkc\\\": networkPlugin cni failed to teardown pod \\\"metrics-server-b955d9d8-4zjkc_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.85 -j CNI-afe6c3e03c38029a1c68f634 -m comment --comment name: \\\"crio\\\" id: \\\"c1c7f8f33469c4b8007d0acc9d20e0977faaebf4b93d16317bc40aa26e603c91\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-afe6c3e03c38029a1c68f634':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/metrics-server-b955d9d8-4zjkc" podUID=f7310338-75db-4112-9f21-d33fba8787e7
	Jun 02 19:47:20 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:47:20.548034     945 cni.go:362] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-64897985d-nvh82" podSandboxID={Type:docker ID:8b396d0bbfb771f695eb66821106467c80a549984eef892ce0662e2749da2639} podNetnsPath="/proc/14784/ns/net" networkType="bridge" networkName="crio"
	Jun 02 19:47:20 newest-cni-20220602193528-12108 kubelet[945]: I0602 19:47:20.563642     945 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="2892e8f35d6ead2bdcc990732d413af4cc1cd4f9a3abf87219801cb55a1e57b3"
	Jun 02 19:47:20 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:47:20.651207     945 cni.go:362] "Error adding pod to network" err="failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-9zf2w" podSandboxID={Type:docker ID:2892e8f35d6ead2bdcc990732d413af4cc1cd4f9a3abf87219801cb55a1e57b3} podNetnsPath="/proc/14783/ns/net" networkType="bridge" networkName="crio"
	Jun 02 19:47:20 newest-cni-20220602193528-12108 kubelet[945]: I0602 19:47:20.763605     945 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="8b396d0bbfb771f695eb66821106467c80a549984eef892ce0662e2749da2639"
	Jun 02 19:47:20 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:47:20.865053     945 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.86 -j CNI-9641ffbf693dcb6cbb3b2f54 -m comment --comment name: \"crio\" id: \"8b396d0bbfb771f695eb66821106467c80a549984eef892ce0662e2749da2639\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-9641ffbf693dcb6cbb3b2f54':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kube-system/coredns-64897985d-nvh82" podSandboxID={Type:docker ID:8b396d0bbfb771f695eb66821106467c80a549984eef892ce0662e2749da2639} podNetnsPath="/proc/14784/ns/net" networkType="bridge" networkName="crio"
	Jun 02 19:47:20 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:47:20.959551     945 cni.go:381] "Error deleting pod from network" err="running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.87 -j CNI-4bd22cb48e44441504a54616 -m comment --comment name: \"crio\" id: \"2892e8f35d6ead2bdcc990732d413af4cc1cd4f9a3abf87219801cb55a1e57b3\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-4bd22cb48e44441504a54616':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-9zf2w" podSandboxID={Type:docker ID:2892e8f35d6ead2bdcc990732d413af4cc1cd4f9a3abf87219801cb55a1e57b3} podNetnsPath="/proc/14783/ns/net" networkType="bridge" networkName="crio"
	Jun 02 19:47:21 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:47:21.377422     945 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"8b396d0bbfb771f695eb66821106467c80a549984eef892ce0662e2749da2639\" network for pod \"coredns-64897985d-nvh82\": networkPlugin cni failed to set up pod \"coredns-64897985d-nvh82_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"8b396d0bbfb771f695eb66821106467c80a549984eef892ce0662e2749da2639\" network for pod \"coredns-64897985d-nvh82\": networkPlugin cni failed to teardown pod \"coredns-64897985d-nvh82_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.86 -j CNI-9641ffbf693dcb6cbb3b2f54 -m comment --comment name: \"crio\" id: \"8b396d0bbfb771f695eb66821106467c80a549984eef892ce0662e2749da2639\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-9641ffbf693dcb6cbb3b2f54':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 02 19:47:21 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:47:21.377664     945 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"8b396d0bbfb771f695eb66821106467c80a549984eef892ce0662e2749da2639\" network for pod \"coredns-64897985d-nvh82\": networkPlugin cni failed to set up pod \"coredns-64897985d-nvh82_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"8b396d0bbfb771f695eb66821106467c80a549984eef892ce0662e2749da2639\" network for pod \"coredns-64897985d-nvh82\": networkPlugin cni failed to teardown pod \"coredns-64897985d-nvh82_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.86 -j CNI-9641ffbf693dcb6cbb3b2f54 -m comment --comment name: \"crio\" id: \"8b396d0bbfb771f695eb66821106467c80a549984eef892ce0662e2749da2639\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-9641ffbf693dcb6cbb3b2f54':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-nvh82"
	Jun 02 19:47:21 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:47:21.377739     945 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"8b396d0bbfb771f695eb66821106467c80a549984eef892ce0662e2749da2639\" network for pod \"coredns-64897985d-nvh82\": networkPlugin cni failed to set up pod \"coredns-64897985d-nvh82_kube-system\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"8b396d0bbfb771f695eb66821106467c80a549984eef892ce0662e2749da2639\" network for pod \"coredns-64897985d-nvh82\": networkPlugin cni failed to teardown pod \"coredns-64897985d-nvh82_kube-system\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.86 -j CNI-9641ffbf693dcb6cbb3b2f54 -m comment --comment name: \"crio\" id: \"8b396d0bbfb771f695eb66821106467c80a549984eef892ce0662e2749da2639\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-9641ffbf693dcb6cbb3b2f54':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kube-system/coredns-64897985d-nvh82"
	Jun 02 19:47:21 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:47:21.378034     945 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-64897985d-nvh82_kube-system(e020a13f-06c3-4682-8596-3644e6368c0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-64897985d-nvh82_kube-system(e020a13f-06c3-4682-8596-3644e6368c0d)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"8b396d0bbfb771f695eb66821106467c80a549984eef892ce0662e2749da2639\\\" network for pod \\\"coredns-64897985d-nvh82\\\": networkPlugin cni failed to set up pod \\\"coredns-64897985d-nvh82_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"8b396d0bbfb771f695eb66821106467c80a549984eef892ce0662e2749da2639\\\" network for pod \\\"coredns-64897985d-nvh82\\\": networkPlugin cni failed to teardown pod \\\"coredns-64897985d-nvh82_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.86 -j CNI-9641ffbf693dcb6cbb3b2f54 -m comment --comment name: \\\"crio\\\" id: \\\"8b396d0bbfb771f695eb66821106467c80a549984eef892ce0662e2749da2639\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-9641ffbf693dcb6cbb3b2f54':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-64897985d-nvh82" podUID=e020a13f-06c3-4682-8596-3644e6368c0d
	Jun 02 19:47:21 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:47:21.476657     945 remote_runtime.go:209] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"2892e8f35d6ead2bdcc990732d413af4cc1cd4f9a3abf87219801cb55a1e57b3\" network for pod \"dashboard-metrics-scraper-56974995fc-9zf2w\": networkPlugin cni failed to set up pod \"dashboard-metrics-scraper-56974995fc-9zf2w_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"2892e8f35d6ead2bdcc990732d413af4cc1cd4f9a3abf87219801cb55a1e57b3\" network for pod \"dashboard-metrics-scraper-56974995fc-9zf2w\": networkPlugin cni failed to teardown pod \"dashboard-metrics-scraper-56974995fc-9zf2w_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.87 -j CNI-4bd22cb48e44441504a54616 -m comment --comment name: \"crio\" id: \"2892e8f35d6ead2bdcc990732d413af4cc1cd4f9a3abf87219801cb55a1e57b3\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-4bd22cb48e44441504a54616':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]"
	Jun 02 19:47:21 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:47:21.476882     945 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"2892e8f35d6ead2bdcc990732d413af4cc1cd4f9a3abf87219801cb55a1e57b3\" network for pod \"dashboard-metrics-scraper-56974995fc-9zf2w\": networkPlugin cni failed to set up pod \"dashboard-metrics-scraper-56974995fc-9zf2w_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"2892e8f35d6ead2bdcc990732d413af4cc1cd4f9a3abf87219801cb55a1e57b3\" network for pod \"dashboard-metrics-scraper-56974995fc-9zf2w\": networkPlugin cni failed to teardown pod \"dashboard-metrics-scraper-56974995fc-9zf2w_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.87 -j CNI-4bd22cb48e44441504a54616 -m comment --comment name: \"crio\" id: \"2892e8f35d6ead2bdcc990732d413af4cc1cd4f9a3abf87219801cb55a1e57b3\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-4bd22cb48e44441504a54616':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-9zf2w"
	Jun 02 19:47:21 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:47:21.476937     945 kuberuntime_manager.go:833] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = [failed to set up sandbox container \"2892e8f35d6ead2bdcc990732d413af4cc1cd4f9a3abf87219801cb55a1e57b3\" network for pod \"dashboard-metrics-scraper-56974995fc-9zf2w\": networkPlugin cni failed to set up pod \"dashboard-metrics-scraper-56974995fc-9zf2w_kubernetes-dashboard\" network: failed to set bridge addr: could not add IP address to \"cni0\": permission denied, failed to clean up sandbox container \"2892e8f35d6ead2bdcc990732d413af4cc1cd4f9a3abf87219801cb55a1e57b3\" network for pod \"dashboard-metrics-scraper-56974995fc-9zf2w\": networkPlugin cni failed to teardown pod \"dashboard-metrics-scraper-56974995fc-9zf2w_kubernetes-dashboard\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.87 -j CNI-4bd22cb48e44441504a54616 -m comment --comment name: \"crio\" id: \"2892e8f35d6ead2bdcc990732d413af4cc1cd4f9a3abf87219801cb55a1e57b3\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-4bd22cb48e44441504a54616':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n]" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-9zf2w"
	Jun 02 19:47:21 newest-cni-20220602193528-12108 kubelet[945]: E0602 19:47:21.477137     945 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dashboard-metrics-scraper-56974995fc-9zf2w_kubernetes-dashboard(0fcdccde-11fb-4570-a19d-b572b11432d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dashboard-metrics-scraper-56974995fc-9zf2w_kubernetes-dashboard(0fcdccde-11fb-4570-a19d-b572b11432d3)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"2892e8f35d6ead2bdcc990732d413af4cc1cd4f9a3abf87219801cb55a1e57b3\\\" network for pod \\\"dashboard-metrics-scraper-56974995fc-9zf2w\\\": networkPlugin cni failed to set up pod \\\"dashboard-metrics-scraper-56974995fc-9zf2w_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"2892e8f35d6ead2bdcc990732d413af4cc1cd4f9a3abf87219801cb55a1e57b3\\\" network for pod \\\"dashboard-metrics-scraper-56974995fc-9zf2w\\\": networkPlugin cni failed to teardown pod \\\"dashboard-metrics-scraper-56974995fc-9zf2w_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.87 -j CNI-4bd22cb48e44441504a54616 -m comment --comment name: \\\"crio\\\" id: \\\"2892e8f35d6ead2bdcc990732d413af4cc1cd4f9a3abf87219801cb55a1e57b3\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-4bd22cb48e44441504a54616':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-9zf2w" podUID=0fcdccde-11fb-4570-a19d-b572b11432d3
	
	* 
	* ==> storage-provisioner [f3713257900f] <==
	* I0602 19:46:29.142768       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0602 19:46:59.162627       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
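
Editor's note: the fatal storage-provisioner line above is a client-side timeout dialing 10.96.0.1:443, the in-cluster kubernetes Service VIP. That is consistent with the CNI bridge failures in the kubelet log: with pod networking broken, nothing inside the cluster can reach the apiserver. A minimal Go sketch of the same reachability probe (address and timeout are taken from the log line; this is not minikube or provisioner code, and it only makes sense when run inside the cluster):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the kubernetes Service VIP the way the provisioner's
		// client does; with the CNI bridge broken this surfaces as
		// "dial tcp 10.96.0.1:443: i/o timeout".
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 32*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver reachable")
	}
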
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20220602193528-12108 -n newest-cni-20220602193528-12108
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20220602193528-12108 -n newest-cni-20220602193528-12108: (8.5430974s)
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-20220602193528-12108 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-nvh82 metrics-server-b955d9d8-4zjkc dashboard-metrics-scraper-56974995fc-9zf2w kubernetes-dashboard-cd7c84bfc-xsbcz
helpers_test.go:272: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context newest-cni-20220602193528-12108 describe pod coredns-64897985d-nvh82 metrics-server-b955d9d8-4zjkc dashboard-metrics-scraper-56974995fc-9zf2w kubernetes-dashboard-cd7c84bfc-xsbcz
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context newest-cni-20220602193528-12108 describe pod coredns-64897985d-nvh82 metrics-server-b955d9d8-4zjkc dashboard-metrics-scraper-56974995fc-9zf2w kubernetes-dashboard-cd7c84bfc-xsbcz: exit status 1 (314.978ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-nvh82" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-4zjkc" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-9zf2w" not found
	Error from server (NotFound): pods "kubernetes-dashboard-cd7c84bfc-xsbcz" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context newest-cni-20220602193528-12108 describe pod coredns-64897985d-nvh82 metrics-server-b955d9d8-4zjkc dashboard-metrics-scraper-56974995fc-9zf2w kubernetes-dashboard-cd7c84bfc-xsbcz: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (77.84s)
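
Editor's note: every pod failure in the kubelet log above is the same two-step error: CNI cannot assign an address to the cni0 bridge ("permission denied"), and the cleanup path then fails because the per-sandbox iptables chain (CNI-4ea7c0c5c78ab8a853699f97 and friends) no longer exists, hence "Couldn't load target ... No such file or directory". When triaging this by hand, one way to see which CNI chains are actually left in the nat table is to parse the output of `iptables -t nat -S`; the sketch below does exactly that (listCNIChains is a hypothetical helper, not part of the test suite):

	package main

	import (
		"bufio"
		"bytes"
		"fmt"
		"os/exec"
		"strings"
	)

	// listCNIChains lists the per-sandbox CNI-* chains declared in the
	// nat table, the same chains the teardown above fails to delete.
	func listCNIChains() ([]string, error) {
		out, err := exec.Command("iptables", "-t", "nat", "-S").Output()
		if err != nil {
			return nil, fmt.Errorf("iptables -t nat -S: %w", err)
		}
		var chains []string
		sc := bufio.NewScanner(bytes.NewReader(out))
		for sc.Scan() {
			// "-N CNI-4ea7c0c5c78ab8a853699f97" declares a chain.
			if strings.HasPrefix(sc.Text(), "-N CNI-") {
				chains = append(chains, strings.TrimPrefix(sc.Text(), "-N "))
			}
		}
		return chains, sc.Err()
	}

	func main() {
		chains, err := listCNIChains()
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Printf("%d CNI chains still present: %v\n", len(chains), chains)
	}
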

                                                
                                    
TestNetworkPlugins/group/false/Start (462.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-20220602191600-12108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p false-20220602191600-12108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker: exit status 1 (7m42.2355795s)

                                                
                                                
-- stdout --
	* [false-20220602191600-12108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14269
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node false-20220602191600-12108 in cluster false-20220602191600-12108
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "false-20220602191600-12108" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0602 19:48:18.378276    7928 out.go:296] Setting OutFile to fd 1996 ...
	I0602 19:48:18.437308    7928 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 19:48:18.437308    7928 out.go:309] Setting ErrFile to fd 1656...
	I0602 19:48:18.437308    7928 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 19:48:18.458285    7928 out.go:303] Setting JSON to false
	I0602 19:48:18.462274    7928 start.go:115] hostinfo: {"hostname":"minikube7","uptime":62440,"bootTime":1654136858,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0602 19:48:18.462274    7928 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 19:48:18.470298    7928 out.go:177] * [false-20220602191600-12108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0602 19:48:18.474283    7928 notify.go:193] Checking for updates...
	I0602 19:48:18.476284    7928 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0602 19:48:18.479278    7928 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0602 19:48:18.482271    7928 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 19:48:18.484274    7928 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 19:48:18.487338    7928 config.go:178] Loaded profile config "auto-20220602191545-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:48:18.488323    7928 config.go:178] Loaded profile config "calico-20220602191616-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:48:18.488323    7928 config.go:178] Loaded profile config "cilium-20220602191616-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:48:18.488323    7928 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 19:48:21.664040    7928 docker.go:137] docker version: linux-20.10.16
	I0602 19:48:21.677803    7928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:48:24.034438    7928 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.3564859s)
	I0602 19:48:24.035371    7928 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:89 OomKillDisable:true NGoroutines:65 SystemTime:2022-06-02 19:48:22.8668845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:48:24.039588    7928 out.go:177] * Using the docker driver based on user configuration
	I0602 19:48:24.043282    7928 start.go:284] selected driver: docker
	I0602 19:48:24.043282    7928 start.go:806] validating driver "docker" against <nil>
	I0602 19:48:24.043282    7928 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 19:48:24.119217    7928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:48:26.467272    7928 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.3479961s)
	I0602 19:48:26.467272    7928 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:89 OomKillDisable:true NGoroutines:65 SystemTime:2022-06-02 19:48:25.2986055 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:48:26.468012    7928 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0602 19:48:26.468724    7928 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0602 19:48:26.472048    7928 out.go:177] * Using Docker Desktop driver with the root privilege
	I0602 19:48:26.474329    7928 cni.go:95] Creating CNI manager for "false"
	I0602 19:48:26.474329    7928 start_flags.go:306] config:
	{Name:false-20220602191600-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:false-20220602191600-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 19:48:26.479271    7928 out.go:177] * Starting control plane node false-20220602191600-12108 in cluster false-20220602191600-12108
	I0602 19:48:26.481255    7928 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 19:48:26.485251    7928 out.go:177] * Pulling base image ...
	I0602 19:48:26.488249    7928 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 19:48:26.488249    7928 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 19:48:26.488249    7928 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 19:48:26.488249    7928 cache.go:57] Caching tarball of preloaded images
	I0602 19:48:26.488249    7928 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 19:48:26.488249    7928 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0602 19:48:26.489256    7928 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\config.json ...
	I0602 19:48:26.489256    7928 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\config.json: {Name:mk2322e008e4974f81d2c667fc6d662f7e2ecf5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:48:27.750864    7928 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 19:48:27.750864    7928 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 19:48:27.750998    7928 cache.go:206] Successfully downloaded all kic artifacts
	I0602 19:48:27.751240    7928 start.go:352] acquiring machines lock for false-20220602191600-12108: {Name:mk9ba5eb39f80beff395501d3c10ee7037599c79 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 19:48:27.751522    7928 start.go:356] acquired machines lock for "false-20220602191600-12108" in 281.6µs
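
Editor's note: the machine lock acquired above advertises Delay:500ms and Timeout:10m0s, i.e. acquisition is a poll-until-deadline loop. A rough Go sketch of those semantics, approximating the lock with an O_EXCL lock file (an assumption for illustration; the log does not show minikube's actual lock implementation):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquireLock polls for the lock file every `delay` until `timeout`
	// elapses, returning a release function on success.
	func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out after %s acquiring %s", timeout, path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquireLock(os.TempDir()+"/machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock acquired")
	}
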
	I0602 19:48:27.751812    7928 start.go:91] Provisioning new machine with config: &{Name:false-20220602191600-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:false-20220602191600-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 19:48:27.751812    7928 start.go:131] createHost starting for "" (driver="docker")
	I0602 19:48:27.756433    7928 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0602 19:48:27.756433    7928 start.go:165] libmachine.API.Create for "false-20220602191600-12108" (driver="docker")
	I0602 19:48:27.756433    7928 client.go:168] LocalClient.Create starting
	I0602 19:48:27.757426    7928 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0602 19:48:27.757426    7928 main.go:134] libmachine: Decoding PEM data...
	I0602 19:48:27.757426    7928 main.go:134] libmachine: Parsing certificate...
	I0602 19:48:27.757426    7928 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0602 19:48:27.757426    7928 main.go:134] libmachine: Decoding PEM data...
	I0602 19:48:27.757426    7928 main.go:134] libmachine: Parsing certificate...
	I0602 19:48:27.770434    7928 cli_runner.go:164] Run: docker network inspect false-20220602191600-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0602 19:48:29.049101    7928 cli_runner.go:211] docker network inspect false-20220602191600-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0602 19:48:29.049101    7928 cli_runner.go:217] Completed: docker network inspect false-20220602191600-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2786616s)
	I0602 19:48:29.063764    7928 network_create.go:272] running [docker network inspect false-20220602191600-12108] to gather additional debugging logs...
	I0602 19:48:29.063764    7928 cli_runner.go:164] Run: docker network inspect false-20220602191600-12108
	W0602 19:48:30.293140    7928 cli_runner.go:211] docker network inspect false-20220602191600-12108 returned with exit code 1
	I0602 19:48:30.293140    7928 cli_runner.go:217] Completed: docker network inspect false-20220602191600-12108: (1.2292667s)
	I0602 19:48:30.293237    7928 network_create.go:275] error running [docker network inspect false-20220602191600-12108]: docker network inspect false-20220602191600-12108: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20220602191600-12108
	I0602 19:48:30.293237    7928 network_create.go:277] output of [docker network inspect false-20220602191600-12108]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20220602191600-12108
	
	** /stderr **
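
Editor's note: the nested stdout/stderr above shows how the probe result is interpreted: a non-zero exit from `docker network inspect` with "No such network" on stderr simply means the network does not exist yet, so minikube falls through to creating it. A minimal sketch of that existence check (networkExists is a hypothetical helper name, not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// networkExists treats any non-zero exit from `docker network
	// inspect` as "no such network", mirroring the probe in the log.
	func networkExists(name string) bool {
		return exec.Command("docker", "network", "inspect", name).Run() == nil
	}

	func main() {
		fmt.Println(networkExists("false-20220602191600-12108"))
	}
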
	I0602 19:48:30.301959    7928 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 19:48:31.519525    7928 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2175602s)
	I0602 19:48:31.540340    7928 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000006520] misses:0}
	I0602 19:48:31.540517    7928 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 19:48:31.540596    7928 network_create.go:115] attempt to create docker network false-20220602191600-12108 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0602 19:48:31.551644    7928 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220602191600-12108
	W0602 19:48:32.822648    7928 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220602191600-12108 returned with exit code 1
	I0602 19:48:32.822648    7928 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220602191600-12108: (1.2709987s)
	W0602 19:48:32.822648    7928 network_create.go:107] failed to create docker network false-20220602191600-12108 192.168.49.0/24, will retry: subnet is taken
	I0602 19:48:32.842631    7928 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006520] amended:false}} dirty:map[] misses:0}
	I0602 19:48:32.842631    7928 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 19:48:32.864692    7928 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006520] amended:true}} dirty:map[192.168.49.0:0xc000006520 192.168.58.0:0xc000667750] misses:0}
	I0602 19:48:32.864692    7928 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 19:48:32.865652    7928 network_create.go:115] attempt to create docker network false-20220602191600-12108 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0602 19:48:32.874673    7928 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220602191600-12108
	I0602 19:48:34.545377    7928 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220602191600-12108: (1.6706961s)
	I0602 19:48:34.545377    7928 network_create.go:99] docker network false-20220602191600-12108 192.168.58.0/24 created
	I0602 19:48:34.545377    7928 kic.go:106] calculated static IP "192.168.58.2" for the "false-20220602191600-12108" container
	I0602 19:48:34.567396    7928 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0602 19:48:35.780776    7928 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.2132538s)
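Every Run: line in this trace is paired with a Completed: line carrying the elapsed time. A sketch of a wrapper producing that pairing, assuming (unverified) that the elapsed time is only logged past some threshold:

```go
// A sketch of the Run/Completed pairing, not minikube's cli_runner.go:
// log the command, run it, and log the elapsed time when it is slow.
package main

import (
	"log"
	"os/exec"
	"strings"
	"time"
)

func runCmd(name string, args ...string) error {
	display := name + " " + strings.Join(args, " ")
	log.Printf("Run: %s", display)
	start := time.Now()
	err := exec.Command(name, args...).Run()
	if d := time.Since(start); d > time.Second { // threshold is an assumption
		log.Printf("Completed: %s: (%s)", display, d)
	}
	return err
}

func main() {
	_ = runCmd("docker", "ps", "-a", "--format", "{{.Names}}")
}
```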
	I0602 19:48:35.789493    7928 cli_runner.go:164] Run: docker volume create false-20220602191600-12108 --label name.minikube.sigs.k8s.io=false-20220602191600-12108 --label created_by.minikube.sigs.k8s.io=true
	I0602 19:48:39.104608    7928 cli_runner.go:217] Completed: docker volume create false-20220602191600-12108 --label name.minikube.sigs.k8s.io=false-20220602191600-12108 --label created_by.minikube.sigs.k8s.io=true: (3.314918s)
	I0602 19:48:39.104661    7928 oci.go:103] Successfully created a docker volume false-20220602191600-12108
	I0602 19:48:39.113746    7928 cli_runner.go:164] Run: docker run --rm --name false-20220602191600-12108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220602191600-12108 --entrypoint /usr/bin/test -v false-20220602191600-12108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib
	I0602 19:48:42.609812    7928 cli_runner.go:217] Completed: docker run --rm --name false-20220602191600-12108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220602191600-12108 --entrypoint /usr/bin/test -v false-20220602191600-12108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib: (3.4960509s)
	I0602 19:48:42.609812    7928 oci.go:107] Successfully prepared a docker volume false-20220602191600-12108
	I0602 19:48:42.609812    7928 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 19:48:42.609812    7928 kic.go:179] Starting extracting preloaded images to volume ...
	I0602 19:48:42.616807    7928 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20220602191600-12108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir
	I0602 19:49:06.778905    7928 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20220602191600-12108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir: (24.1617077s)
	I0602 19:49:06.778905    7928 kic.go:188] duration metric: took 24.168987 seconds to extract preloaded images to volume
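Two docker run idioms drive this phase: a throwaway "preload sidecar" that runs /usr/bin/test -d /var/lib to check that the named volume mounts correctly, and an extraction container that untars the lz4 preload into it. A hedged sketch of both invocations; the helper names are illustrative, while the commands, paths, and image come from the log (image digest elided):

```go
// Sketch of the two docker run steps above; helper names are illustrative.
package main

import (
	"log"
	"os/exec"
)

const image = "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252"

// probeVolume runs `test -d /var/lib` in a throwaway container with the
// named volume mounted at /var; exit 0 means the directory is reachable.
func probeVolume(volume string) error {
	return exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/test",
		"-v", volume+":/var", image, "-d", "/var/lib").Run()
}

// extractPreload untars the lz4 preload into the volume, mirroring the
// ~24s extraction step logged above.
func extractPreload(tarball, volume string) error {
	return exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").Run()
}

func main() {
	vol := "false-20220602191600-12108"
	if err := probeVolume(vol); err != nil {
		log.Fatal(err)
	}
	tarball := `C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4`
	if err := extractPreload(tarball, vol); err != nil {
		log.Fatal(err)
	}
}
```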
	I0602 19:49:06.787075    7928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:49:09.159670    7928 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.3725841s)
	I0602 19:49:09.159670    7928 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:75 OomKillDisable:true NGoroutines:60 SystemTime:2022-06-02 19:49:08.0183569 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:49:09.167677    7928 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0602 19:49:11.436406    7928 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.2687191s)
	I0602 19:49:11.446024    7928 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname false-20220602191600-12108 --name false-20220602191600-12108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220602191600-12108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=false-20220602191600-12108 --network false-20220602191600-12108 --ip 192.168.58.2 --volume false-20220602191600-12108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496
	W0602 19:49:12.905545    7928 cli_runner.go:211] docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname false-20220602191600-12108 --name false-20220602191600-12108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220602191600-12108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=false-20220602191600-12108 --network false-20220602191600-12108 --ip 192.168.58.2 --volume false-20220602191600-12108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 returned with exit code 125
	I0602 19:49:12.905545    7928 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname false-20220602191600-12108 --name false-20220602191600-12108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220602191600-12108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=false-20220602191600-12108 --network false-20220602191600-12108 --ip 192.168.58.2 --volume false-20220602191600-12108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496: (1.4595153s)
	I0602 19:49:12.905545    7928 client.go:171] LocalClient.Create took 45.1489146s
	I0602 19:49:14.927030    7928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 19:49:14.938281    7928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108
	W0602 19:49:16.137750    7928 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108 returned with exit code 1
	I0602 19:49:16.137779    7928 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108: (1.1992897s)
	I0602 19:49:16.137779    7928 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 19:49:16.422388    7928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108
	W0602 19:49:17.700953    7928 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108 returned with exit code 1
	I0602 19:49:17.700953    7928 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108: (1.2785593s)
	W0602 19:49:17.700953    7928 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0602 19:49:17.700953    7928 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 19:49:17.712670    7928 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 19:49:17.719677    7928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108
	W0602 19:49:18.940046    7928 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108 returned with exit code 1
	I0602 19:49:18.940046    7928 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108: (1.2203631s)
	I0602 19:49:18.940046    7928 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 19:49:19.249575    7928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108
	W0602 19:49:20.545570    7928 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108 returned with exit code 1
	I0602 19:49:20.545706    7928 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108: (1.2956287s)
	W0602 19:49:20.545941    7928 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0602 19:49:20.546036    7928 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
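The retry.go lines above re-run the same port probe after sub-second, non-round delays (276.165072ms, 291.140013ms), suggesting jittered backoff. A minimal sketch of that pattern; the exact policy is an assumption:

```go
// A sketch of retry-with-jitter; the backoff policy is an assumption.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base + time.Duration(rand.Int63n(int64(base))) // jittered delay
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	err := retry(2, 200*time.Millisecond, func() error {
		return errors.New("unable to inspect a not running container to get SSH port")
	})
	fmt.Println("gave up:", err)
}
```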
	I0602 19:49:20.546036    7928 start.go:134] duration metric: createHost completed in 52.7939938s
	I0602 19:49:20.546036    7928 start.go:81] releasing machines lock for "false-20220602191600-12108", held for 52.7942845s
	W0602 19:49:20.546414    7928 start.go:599] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname false-20220602191600-12108 --name false-20220602191600-12108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220602191600-12108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=false-20220602191600-12108 --network false-20220602191600-12108 --ip 192.168.58.2 --volume false-20220602191600-12108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496: exit status 125
	stdout:
	69e40d0fcb6190193fe373cd554a511deee3c01e27d9b4a3fa3c04508762bd11
	
	stderr:
	docker: Error response from daemon: network false-20220602191600-12108 not found.
	I0602 19:49:20.575139    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:49:21.815908    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.240764s)
	W0602 19:49:21.815908    7928 start.go:604] delete host: Docker machine "false-20220602191600-12108" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0602 19:49:21.815908    7928 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname false-20220602191600-12108 --name false-20220602191600-12108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220602191600-12108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=false-20220602191600-12108 --network false-20220602191600-12108 --ip 192.168.58.2 --volume false-20220602191600-12108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496: exit status 125
	stdout:
	69e40d0fcb6190193fe373cd554a511deee3c01e27d9b4a3fa3c04508762bd11
	
	stderr:
	docker: Error response from daemon: network false-20220602191600-12108 not found.
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname false-20220602191600-12108 --name false-20220602191600-12108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220602191600-12108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=false-20220602191600-12108 --network false-20220602191600-12108 --ip 192.168.58.2 --volume false-20220602191600-12108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496: exit status 125
	stdout:
	69e40d0fcb6190193fe373cd554a511deee3c01e27d9b4a3fa3c04508762bd11
	
	stderr:
	docker: Error response from daemon: network false-20220602191600-12108 not found.
	
	I0602 19:49:21.815908    7928 start.go:614] Will try again in 5 seconds ...
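Exit status 125 from docker run signals a failure in the Docker daemon itself (here "network false-20220602191600-12108 not found", moments after that network was created) rather than in the containerized command, which would surface as 126/127. Recovering the code in Go looks roughly like this; the network name below is deliberately bogus:

```go
// Minimal sketch of reading a docker run exit code from Go.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Deliberately reference a network that does not exist.
	err := exec.Command("docker", "run", "--rm",
		"--network", "no-such-network", "busybox", "true").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("exit status", ee.ExitCode()) // daemon errors surface as 125
	}
}
```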
	I0602 19:49:26.825502    7928 start.go:352] acquiring machines lock for false-20220602191600-12108: {Name:mk9ba5eb39f80beff395501d3c10ee7037599c79 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 19:49:26.825502    7928 start.go:356] acquired machines lock for "false-20220602191600-12108" in 0s
	I0602 19:49:26.825502    7928 start.go:94] Skipping create...Using existing machine configuration
	I0602 19:49:26.825502    7928 fix.go:55] fixHost starting: 
	I0602 19:49:26.847916    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:49:28.064540    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2166188s)
	I0602 19:49:28.064540    7928 fix.go:103] recreateIfNeeded on false-20220602191600-12108: state= err=<nil>
	I0602 19:49:28.064540    7928 fix.go:108] machineExists: false. err=machine does not exist
	I0602 19:49:28.070530    7928 out.go:177] * docker "false-20220602191600-12108" container is missing, will recreate.
	I0602 19:49:28.073541    7928 delete.go:124] DEMOLISHING false-20220602191600-12108 ...
	I0602 19:49:28.087530    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:49:29.289532    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2019969s)
	I0602 19:49:29.289532    7928 stop.go:79] host is in state 
	I0602 19:49:29.289532    7928 main.go:134] libmachine: Stopping "false-20220602191600-12108"...
	I0602 19:49:29.303533    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:49:30.569856    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2663177s)
	I0602 19:49:30.588862    7928 kic_runner.go:93] Run: systemctl --version
	I0602 19:49:30.588862    7928 kic_runner.go:114] Args: [docker exec --privileged false-20220602191600-12108 systemctl --version]
	I0602 19:49:31.836125    7928 kic_runner.go:93] Run: sudo service kubelet stop
	I0602 19:49:31.836125    7928 kic_runner.go:114] Args: [docker exec --privileged false-20220602191600-12108 sudo service kubelet stop]
	I0602 19:49:33.031063    7928 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 69e40d0fcb6190193fe373cd554a511deee3c01e27d9b4a3fa3c04508762bd11 is not running
	
	** /stderr **
	W0602 19:49:33.031063    7928 kic.go:439] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 69e40d0fcb6190193fe373cd554a511deee3c01e27d9b4a3fa3c04508762bd11 is not running
	I0602 19:49:33.050607    7928 kic_runner.go:93] Run: sudo service kubelet stop
	I0602 19:49:33.050607    7928 kic_runner.go:114] Args: [docker exec --privileged false-20220602191600-12108 sudo service kubelet stop]
	I0602 19:49:34.293496    7928 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 69e40d0fcb6190193fe373cd554a511deee3c01e27d9b4a3fa3c04508762bd11 is not running
	
	** /stderr **
	W0602 19:49:34.293533    7928 kic.go:441] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 69e40d0fcb6190193fe373cd554a511deee3c01e27d9b4a3fa3c04508762bd11 is not running
	I0602 19:49:34.318769    7928 kic_runner.go:93] Run: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0602 19:49:34.318769    7928 kic_runner.go:114] Args: [docker exec --privileged false-20220602191600-12108 docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}]
	I0602 19:49:35.536578    7928 kic.go:452] unable list containers : docker: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 69e40d0fcb6190193fe373cd554a511deee3c01e27d9b4a3fa3c04508762bd11 is not running
	I0602 19:49:35.536578    7928 kic.go:462] successfully stopped kubernetes!
	I0602 19:49:35.555751    7928 kic_runner.go:93] Run: pgrep kube-apiserver
	I0602 19:49:35.555751    7928 kic_runner.go:114] Args: [docker exec --privileged false-20220602191600-12108 pgrep kube-apiserver]
	I0602 19:49:38.056144    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:49:39.304751    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2484926s)
	I0602 19:49:42.322922    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:49:43.623299    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.3003716s)
	I0602 19:49:46.662410    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:49:48.119986    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.45757s)
	I0602 19:49:51.151230    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:49:52.355887    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2046525s)
	I0602 19:49:55.377359    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:49:56.610207    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2327599s)
	I0602 19:49:59.629284    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:50:00.883484    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2531031s)
	I0602 19:50:03.911471    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:50:05.151753    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2402757s)
	I0602 19:50:08.182473    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:50:09.396850    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2143717s)
	I0602 19:50:12.417624    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:50:13.655042    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2373535s)
	I0602 19:50:16.685761    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:50:17.946219    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.260452s)
	I0602 19:50:20.977102    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:50:22.226478    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2493705s)
	I0602 19:50:25.249421    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:50:26.471007    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2215808s)
	I0602 19:50:29.492215    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:50:30.780452    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.288093s)
	I0602 19:50:33.794853    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:50:35.068028    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2729937s)
	I0602 19:50:38.092976    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:50:39.301943    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2087575s)
	I0602 19:50:42.328034    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:50:43.621623    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2935838s)
	I0602 19:50:46.652160    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:50:47.930018    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2778522s)
	I0602 19:50:50.959216    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:50:52.181109    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2218877s)
	I0602 19:50:55.225949    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:50:56.448552    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2225976s)
	I0602 19:50:59.481099    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:51:00.881911    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.4008056s)
	I0602 19:51:03.901850    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:51:05.222150    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.3202937s)
	I0602 19:51:08.242046    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:51:09.511257    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2692056s)
	I0602 19:51:12.530192    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:51:13.847807    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.3176098s)
	I0602 19:51:16.885653    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:51:18.416537    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.530877s)
	I0602 19:51:21.450825    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:51:22.733110    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2822799s)
	I0602 19:51:25.768768    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:51:27.036388    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2676149s)
	I0602 19:51:30.064081    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:51:31.278363    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2141405s)
	I0602 19:51:34.305304    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:51:35.555277    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.249842s)
	I0602 19:51:38.576874    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:51:39.813114    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.236124s)
	I0602 19:51:42.840191    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:51:44.109947    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2692132s)
	I0602 19:51:47.131235    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:51:48.352583    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2213423s)
	I0602 19:51:51.378106    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:51:52.497119    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.1190083s)
	I0602 19:51:55.522753    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:51:56.780467    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2574839s)
	I0602 19:51:59.800577    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:52:00.942385    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.1416577s)
	I0602 19:52:03.967574    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:52:05.176302    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2087229s)
	I0602 19:52:08.193277    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:52:09.411865    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2185823s)
	I0602 19:52:12.432339    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:52:13.688315    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2558448s)
	I0602 19:52:16.715414    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:52:18.025014    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.3093085s)
	I0602 19:52:21.060528    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:52:22.314852    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2541189s)
	I0602 19:52:25.348633    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:52:26.526848    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.1781619s)
	I0602 19:52:29.550444    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:52:30.773124    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2226748s)
	I0602 19:52:33.791533    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:52:35.021564    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.230026s)
	I0602 19:52:38.039711    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:52:39.275067    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2352587s)
	I0602 19:52:42.305132    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:52:43.547123    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2417424s)
	I0602 19:52:46.573876    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:52:47.816781    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2428481s)
	I0602 19:52:50.840169    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:52:52.086657    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2463052s)
	I0602 19:52:55.110188    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:52:56.445406    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.335212s)
	I0602 19:52:59.463396    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:53:00.690340    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2267864s)
	I0602 19:53:03.721599    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:53:04.986174    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2645283s)
	I0602 19:53:08.019791    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:53:09.272844    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.252977s)
	I0602 19:53:12.295524    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:53:13.545939    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.250356s)
	I0602 19:53:16.578489    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:53:17.866371    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.2876334s)
	I0602 19:53:20.883696    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:53:22.302391    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.4185717s)
	I0602 19:53:25.323537    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:53:26.627883    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.3042752s)
	I0602 19:53:29.652469    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:53:31.100188    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.4475278s)
	I0602 19:53:34.123748    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:53:35.501760    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.3772748s)
	I0602 19:53:38.525578    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:53:39.903622    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.3778443s)
	I0602 19:53:42.939396    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:53:44.269710    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.3300775s)
	I0602 19:53:47.304569    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:53:48.698470    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.393699s)
	I0602 19:53:51.719829    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:53:53.046868    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.3268909s)
	I0602 19:53:56.062371    7928 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0602 19:53:56.062502    7928 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
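The long block above is a stop-poll loop: the container's .State.Status is inspected roughly every four seconds, and after 60 attempts the stop gives up with "Maximum number of retries (60) exceeded". Here the status comes back empty (the container record is broken, per "host is in state" earlier), so the loop can never succeed. A sketch of that shape, with the interval and success condition as assumptions:

```go
// A sketch of the polling shape above; the interval and the "exited"
// success condition are assumptions, not minikube's verified logic.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format={{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

func waitStopped(name string, maxRetries int) error {
	for i := 0; i < maxRetries; i++ {
		if st, err := containerStatus(name); err == nil && st == "exited" {
			return nil
		}
		time.Sleep(3 * time.Second) // plus ~1.3s per inspect matches the ~4s cadence
	}
	return fmt.Errorf("Maximum number of retries (%d) exceeded", maxRetries)
}

func main() {
	fmt.Println(waitStopped("false-20220602191600-12108", 60))
}
```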
	I0602 19:53:56.082738    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:53:57.515926    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.4330378s)
	W0602 19:53:57.516022    7928 delete.go:135] deletehost failed: Docker machine "false-20220602191600-12108" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0602 19:53:57.525933    7928 cli_runner.go:164] Run: docker container inspect -f {{.Id}} false-20220602191600-12108
	I0602 19:53:58.944444    7928 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} false-20220602191600-12108: (1.4159438s)
	I0602 19:53:58.960116    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:54:00.278647    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.3185252s)
	I0602 19:54:00.286594    7928 cli_runner.go:164] Run: docker exec --privileged -t false-20220602191600-12108 /bin/bash -c "sudo init 0"
	W0602 19:54:01.522120    7928 cli_runner.go:211] docker exec --privileged -t false-20220602191600-12108 /bin/bash -c "sudo init 0" returned with exit code 1
	I0602 19:54:01.522354    7928 cli_runner.go:217] Completed: docker exec --privileged -t false-20220602191600-12108 /bin/bash -c "sudo init 0": (1.2352637s)
	I0602 19:54:01.522406    7928 oci.go:625] error shutdown false-20220602191600-12108: docker exec --privileged -t false-20220602191600-12108 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 69e40d0fcb6190193fe373cd554a511deee3c01e27d9b4a3fa3c04508762bd11 is not running
	I0602 19:54:02.540155    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:54:03.720778    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.1804225s)
	I0602 19:54:03.720778    7928 oci.go:639] temporary error: container false-20220602191600-12108 status is  but expect it to be exited
	I0602 19:54:03.720980    7928 oci.go:645] Successfully shutdown container false-20220602191600-12108
	I0602 19:54:03.733381    7928 cli_runner.go:164] Run: docker rm -f -v false-20220602191600-12108
	I0602 19:54:08.868841    7928 cli_runner.go:217] Completed: docker rm -f -v false-20220602191600-12108: (5.1352913s)
	I0602 19:54:08.880793    7928 cli_runner.go:164] Run: docker container inspect -f {{.Id}} false-20220602191600-12108
	W0602 19:54:09.972500    7928 cli_runner.go:211] docker container inspect -f {{.Id}} false-20220602191600-12108 returned with exit code 1
	I0602 19:54:09.972653    7928 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} false-20220602191600-12108: (1.0915822s)
	I0602 19:54:09.984938    7928 cli_runner.go:164] Run: docker network inspect false-20220602191600-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0602 19:54:11.094729    7928 cli_runner.go:211] docker network inspect false-20220602191600-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0602 19:54:11.094729    7928 cli_runner.go:217] Completed: docker network inspect false-20220602191600-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1097857s)
	I0602 19:54:11.102187    7928 network_create.go:272] running [docker network inspect false-20220602191600-12108] to gather additional debugging logs...
	I0602 19:54:11.102732    7928 cli_runner.go:164] Run: docker network inspect false-20220602191600-12108
	W0602 19:54:12.238567    7928 cli_runner.go:211] docker network inspect false-20220602191600-12108 returned with exit code 1
	I0602 19:54:12.238605    7928 cli_runner.go:217] Completed: docker network inspect false-20220602191600-12108: (1.1357927s)
	I0602 19:54:12.238645    7928 network_create.go:275] error running [docker network inspect false-20220602191600-12108]: docker network inspect false-20220602191600-12108: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20220602191600-12108
	I0602 19:54:12.238708    7928 network_create.go:277] output of [docker network inspect false-20220602191600-12108]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20220602191600-12108
	
	** /stderr **
	W0602 19:54:12.239814    7928 delete.go:139] delete failed (probably ok) <nil>
	I0602 19:54:12.239814    7928 fix.go:115] Sleeping 1 second for extra luck!
	I0602 19:54:13.244731    7928 start.go:131] createHost starting for "" (driver="docker")
	I0602 19:54:13.251470    7928 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0602 19:54:13.252360    7928 start.go:165] libmachine.API.Create for "false-20220602191600-12108" (driver="docker")
	I0602 19:54:13.252479    7928 client.go:168] LocalClient.Create starting
	I0602 19:54:13.253476    7928 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0602 19:54:13.253476    7928 main.go:134] libmachine: Decoding PEM data...
	I0602 19:54:13.253476    7928 main.go:134] libmachine: Parsing certificate...
	I0602 19:54:13.254216    7928 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0602 19:54:13.254216    7928 main.go:134] libmachine: Decoding PEM data...
	I0602 19:54:13.254216    7928 main.go:134] libmachine: Parsing certificate...
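The Reading/Decoding/Parsing trio above matches Go's standard certificate-loading sequence. A condensed sketch, with the path taken from the log and error handling abbreviated:

```go
// Condensed sketch of the certificate-load steps logged above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile(`C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem`)
	if err != nil {
		log.Fatal(err) // "Reading certificate data..."
	}
	block, _ := pem.Decode(data) // "Decoding PEM data..."
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes) // "Parsing certificate..."
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(cert.Subject)
}
```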
	I0602 19:54:13.267898    7928 cli_runner.go:164] Run: docker network inspect false-20220602191600-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0602 19:54:14.571555    7928 cli_runner.go:211] docker network inspect false-20220602191600-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0602 19:54:14.571636    7928 cli_runner.go:217] Completed: docker network inspect false-20220602191600-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.3035927s)
	I0602 19:54:14.584540    7928 network_create.go:272] running [docker network inspect false-20220602191600-12108] to gather additional debugging logs...
	I0602 19:54:14.584540    7928 cli_runner.go:164] Run: docker network inspect false-20220602191600-12108
	W0602 19:54:15.837122    7928 cli_runner.go:211] docker network inspect false-20220602191600-12108 returned with exit code 1
	I0602 19:54:15.837122    7928 cli_runner.go:217] Completed: docker network inspect false-20220602191600-12108: (1.2525767s)
	I0602 19:54:15.837360    7928 network_create.go:275] error running [docker network inspect false-20220602191600-12108]: docker network inspect false-20220602191600-12108: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20220602191600-12108
	I0602 19:54:15.837462    7928 network_create.go:277] output of [docker network inspect false-20220602191600-12108]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20220602191600-12108
	
	** /stderr **
	I0602 19:54:15.848753    7928 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 19:54:17.194216    7928 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.3454571s)
	I0602 19:54:17.212395    7928 network.go:284] reusing subnet 192.168.49.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006520] amended:true}} dirty:map[192.168.49.0:0xc000006520 192.168.58.0:0xc000667750] misses:0}
	I0602 19:54:17.212395    7928 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
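The &{mu:... read:... dirty:...} dumps are the internals of a sync.Map keyed by subnet: reservations last 1m0s, so by now the 192.168.49.0 entry made at 19:48:31 has expired and is reused. A hedged sketch of that bookkeeping; names are illustrative, not minikube's:

```go
// Illustrative reservation bookkeeping; names are not minikube's.
package main

import (
	"fmt"
	"sync"
	"time"
)

var reservations sync.Map // subnet -> expiry

func reserve(subnet string, ttl time.Duration) bool {
	if v, ok := reservations.Load(subnet); ok && time.Now().Before(v.(time.Time)) {
		return false // unexpired reservation: skip this subnet
	}
	reservations.Store(subnet, time.Now().Add(ttl)) // new or expired: (re)use it
	return true
}

func main() {
	fmt.Println(reserve("192.168.49.0", time.Minute)) // true: reserving
	fmt.Println(reserve("192.168.49.0", time.Minute)) // false: skipping (reserved)
}
```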
	I0602 19:54:17.212395    7928 network_create.go:115] attempt to create docker network false-20220602191600-12108 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0602 19:54:17.221049    7928 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220602191600-12108
	W0602 19:54:18.667025    7928 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220602191600-12108 returned with exit code 1
	I0602 19:54:18.667101    7928 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220602191600-12108: (1.4459237s)
	W0602 19:54:18.667255    7928 network_create.go:107] failed to create docker network false-20220602191600-12108 192.168.49.0/24, will retry: subnet is taken
	I0602 19:54:18.685483    7928 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006520] amended:true}} dirty:map[192.168.49.0:0xc000006520 192.168.58.0:0xc000667750] misses:0}
	I0602 19:54:18.685483    7928 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 19:54:18.702160    7928 network.go:284] reusing subnet 192.168.58.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000006520] amended:true}} dirty:map[192.168.49.0:0xc000006520 192.168.58.0:0xc000667750] misses:1}
	I0602 19:54:18.702297    7928 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 19:54:18.702297    7928 network_create.go:115] attempt to create docker network false-20220602191600-12108 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0602 19:54:18.709969    7928 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220602191600-12108
	I0602 19:54:20.251751    7928 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20220602191600-12108: (1.5412297s)
	I0602 19:54:20.251751    7928 network_create.go:99] docker network false-20220602191600-12108 192.168.58.0/24 created
	I0602 19:54:20.251751    7928 kic.go:106] calculated static IP "192.168.58.2" for the "false-20220602191600-12108" container
	I0602 19:54:20.272576    7928 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0602 19:54:21.667162    7928 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.3943322s)
	I0602 19:54:21.676408    7928 cli_runner.go:164] Run: docker volume create false-20220602191600-12108 --label name.minikube.sigs.k8s.io=false-20220602191600-12108 --label created_by.minikube.sigs.k8s.io=true
	I0602 19:54:22.896023    7928 cli_runner.go:217] Completed: docker volume create false-20220602191600-12108 --label name.minikube.sigs.k8s.io=false-20220602191600-12108 --label created_by.minikube.sigs.k8s.io=true: (1.2194281s)
	I0602 19:54:22.896089    7928 oci.go:103] Successfully created a docker volume false-20220602191600-12108
	I0602 19:54:22.911194    7928 cli_runner.go:164] Run: docker run --rm --name false-20220602191600-12108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220602191600-12108 --entrypoint /usr/bin/test -v false-20220602191600-12108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib
	I0602 19:54:25.680997    7928 cli_runner.go:217] Completed: docker run --rm --name false-20220602191600-12108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220602191600-12108 --entrypoint /usr/bin/test -v false-20220602191600-12108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib: (2.7694274s)
	I0602 19:54:25.680997    7928 oci.go:107] Successfully prepared a docker volume false-20220602191600-12108
	I0602 19:54:25.680997    7928 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 19:54:25.681170    7928 kic.go:179] Starting extracting preloaded images to volume ...
	I0602 19:54:25.690981    7928 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20220602191600-12108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir
	I0602 19:54:53.599646    7928 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20220602191600-12108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir: (27.9083051s)
	I0602 19:54:53.599841    7928 kic.go:188] duration metric: took 27.918549 seconds to extract preloaded images to volume
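
The 27.9s extraction above is a throwaway container whose entrypoint is tar: the lz4 preload tarball is mounted read-only at /preloaded.tar and the named volume at /extractDir. A sketch of assembling that command in Go (the tarball path below is shortened for readability; the image reference is the one used in this run):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload builds the same shape of command as the cli_runner call
    // above: docker run --rm --entrypoint /usr/bin/tar ... -I lz4 -xf ...
    func extractPreload(tarball, volume, image string) *exec.Cmd {
        return exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro", // read-only: the cache is shared
            "-v", volume+":/extractDir",        // images land in the node's /var volume
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    }

    func main() {
        cmd := extractPreload(
            `C:\cache\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4`, // shortened path
            "false-20220602191600-12108",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252")
        fmt.Println(cmd.String())
    }
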
	I0602 19:54:53.608273    7928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:54:56.002272    7928 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.3938513s)
	I0602 19:54:56.002885    7928 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:78 OomKillDisable:true NGoroutines:69 SystemTime:2022-06-02 19:54:54.8066394 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:54:56.013069    7928 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0602 19:54:58.242655    7928 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.2295759s)
	I0602 19:54:58.257792    7928 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname false-20220602191600-12108 --name false-20220602191600-12108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220602191600-12108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=false-20220602191600-12108 --network false-20220602191600-12108 --ip 192.168.58.2 --volume false-20220602191600-12108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496
	I0602 19:55:02.234766    7928 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname false-20220602191600-12108 --name false-20220602191600-12108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20220602191600-12108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=false-20220602191600-12108 --network false-20220602191600-12108 --ip 192.168.58.2 --volume false-20220602191600-12108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496: (3.9767712s)
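
Every port in the docker run above is published as 127.0.0.1:: with the host port left empty, so Docker binds a free ephemeral port on loopback; the repeated docker container inspect calls that follow are how the chosen port is discovered (55306 for 22/tcp on this run). A sketch of that lookup using the same Go template the log shows:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPort asks the daemon which ephemeral host port it bound for a
    // container port published as 127.0.0.1:: (empty host port = pick one).
    func hostPort(container, port string) (string, error) {
        tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        port, err := hostPort("false-20220602191600-12108", "22/tcp")
        fmt.Println(port, err) // 55306 on this run
    }
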
	I0602 19:55:02.246548    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Running}}
	I0602 19:55:03.510323    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Running}}: (1.2637691s)
	I0602 19:55:03.675315    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:55:04.856505    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.1809273s)
	I0602 19:55:04.865336    7928 cli_runner.go:164] Run: docker exec false-20220602191600-12108 stat /var/lib/dpkg/alternatives/iptables
	I0602 19:55:06.412150    7928 cli_runner.go:217] Completed: docker exec false-20220602191600-12108 stat /var/lib/dpkg/alternatives/iptables: (1.5468068s)
	I0602 19:55:06.412150    7928 oci.go:247] the created container "false-20220602191600-12108" has a running status.
	I0602 19:55:06.412150    7928 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\false-20220602191600-12108\id_rsa...
	I0602 19:55:06.608399    7928 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\false-20220602191600-12108\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0602 19:55:08.044076    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:55:09.190986    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.1468643s)
	I0602 19:55:09.217823    7928 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0602 19:55:09.217823    7928 kic_runner.go:114] Args: [docker exec --privileged false-20220602191600-12108 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0602 19:55:10.555714    7928 kic_runner.go:123] Done: [docker exec --privileged false-20220602191600-12108 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.3378856s)
	I0602 19:55:10.564301    7928 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\false-20220602191600-12108\id_rsa...
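
The kic SSH bootstrap above generates a fresh key pair for the node, copies the public half into /home/docker/.ssh/authorized_keys (381 bytes here), chowns it inside the container, and then restricts access to the private key on the host. A rough Go equivalent, assuming the extra golang.org/x/crypto/ssh dependency; the 0600 mode stands in for the Windows ACL tightening the log describes:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // newNodeKey writes an RSA private key with owner-only permissions and
    // returns the matching authorized_keys line for the container.
    func newNodeKey(path string) (string, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return "", err
        }
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile(path, privPEM, 0o600); err != nil { // only current user may read
            return "", err
        }
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            return "", err
        }
        return string(ssh.MarshalAuthorizedKey(pub)), nil
    }

    func main() {
        line, err := newNodeKey("id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        os.Stdout.WriteString(line) // --> /home/docker/.ssh/authorized_keys
    }
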
	I0602 19:55:11.120374    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:55:12.260691    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.1401637s)
	I0602 19:55:12.260691    7928 machine.go:88] provisioning docker machine ...
	I0602 19:55:12.260814    7928 ubuntu.go:169] provisioning hostname "false-20220602191600-12108"
	I0602 19:55:12.273123    7928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108
	I0602 19:55:13.467958    7928 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108: (1.1946799s)
	I0602 19:55:13.477570    7928 main.go:134] libmachine: Using SSH client type: native
	I0602 19:55:13.488279    7928 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 55306 <nil> <nil>}
	I0602 19:55:13.488381    7928 main.go:134] libmachine: About to run SSH command:
	sudo hostname false-20220602191600-12108 && echo "false-20220602191600-12108" | sudo tee /etc/hostname
	I0602 19:55:13.865687    7928 main.go:134] libmachine: SSH cmd err, output: <nil>: false-20220602191600-12108
	
	I0602 19:55:13.878751    7928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108
	I0602 19:55:15.133579    7928 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108: (1.2545558s)
	I0602 19:55:15.139315    7928 main.go:134] libmachine: Using SSH client type: native
	I0602 19:55:15.139842    7928 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 55306 <nil> <nil>}
	I0602 19:55:15.139925    7928 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfalse-20220602191600-12108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 false-20220602191600-12108/g' /etc/hosts;
				else 
					echo '127.0.1.1 false-20220602191600-12108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 19:55:15.362924    7928 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 19:55:15.363000    7928 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0602 19:55:15.363000    7928 ubuntu.go:177] setting up certificates
	I0602 19:55:15.363000    7928 provision.go:83] configureAuth start
	I0602 19:55:15.373662    7928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20220602191600-12108
	I0602 19:55:16.571624    7928 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20220602191600-12108: (1.1976815s)
	I0602 19:55:16.571707    7928 provision.go:138] copyHostCerts
	I0602 19:55:16.572100    7928 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0602 19:55:16.572100    7928 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0602 19:55:16.572535    7928 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0602 19:55:16.573903    7928 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0602 19:55:16.573903    7928 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0602 19:55:16.574278    7928 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0602 19:55:16.575410    7928 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0602 19:55:16.575410    7928 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0602 19:55:16.575410    7928 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1675 bytes)
	I0602 19:55:16.576916    7928 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.false-20220602191600-12108 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube false-20220602191600-12108]
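
The server cert generated here is signed by the minikube CA and carries the SANs from the log line (node IP, loopback, localhost, minikube, and the profile name), so TLS verification succeeds whether the daemon is reached via 127.0.0.1 or 192.168.58.2. A self-contained Go sketch of a certificate with that shape; this is illustrative, not minikube's provision code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // serverCert issues a CA-signed certificate whose SANs match the list
    // in the log: the node IP, loopback, and the hostnames clients may use.
    func serverCert(ca *x509.Certificate, caKey *rsa.PrivateKey, host string) ([]byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins." + host}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            IPAddresses:  []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", host},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    }

    func main() {
        // Throwaway self-signed CA so the example runs on its own.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: "minikubeCA"},
            NotBefore: time.Now(), NotAfter: time.Now().AddDate(10, 0, 0),
            IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true,
        }
        der, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        ca, _ := x509.ParseCertificate(der)
        cert, err := serverCert(ca, caKey, "false-20220602191600-12108")
        fmt.Println(len(cert), err) // DER length of the signed server cert
    }
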
	I0602 19:55:16.719100    7928 provision.go:172] copyRemoteCerts
	I0602 19:55:16.739684    7928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 19:55:16.744690    7928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108
	I0602 19:55:17.964674    7928 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108: (1.2197765s)
	I0602 19:55:17.965197    7928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55306 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\false-20220602191600-12108\id_rsa Username:docker}
	I0602 19:55:18.121586    7928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.3818961s)
	I0602 19:55:18.122414    7928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0602 19:55:18.176916    7928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0602 19:55:18.235903    7928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0602 19:55:18.291320    7928 provision.go:86] duration metric: configureAuth took 2.9283074s
	I0602 19:55:18.291320    7928 ubuntu.go:193] setting minikube options for container-runtime
	I0602 19:55:18.292399    7928 config.go:178] Loaded profile config "false-20220602191600-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:55:18.306593    7928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108
	I0602 19:55:19.483486    7928 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108: (1.1767637s)
	I0602 19:55:19.488463    7928 main.go:134] libmachine: Using SSH client type: native
	I0602 19:55:19.488992    7928 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 55306 <nil> <nil>}
	I0602 19:55:19.488992    7928 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 19:55:19.715864    7928 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 19:55:19.715957    7928 ubuntu.go:71] root file system type: overlay
	I0602 19:55:19.716420    7928 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 19:55:19.725560    7928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108
	I0602 19:55:20.861700    7928 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108: (1.1360708s)
	I0602 19:55:20.865849    7928 main.go:134] libmachine: Using SSH client type: native
	I0602 19:55:20.866228    7928 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 55306 <nil> <nil>}
	I0602 19:55:20.866391    7928 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 19:55:21.151247    7928 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 19:55:21.165920    7928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108
	I0602 19:55:22.371189    7928 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108: (1.2051709s)
	I0602 19:55:22.377471    7928 main.go:134] libmachine: Using SSH client type: native
	I0602 19:55:22.378321    7928 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 55306 <nil> <nil>}
	I0602 19:55:22.378321    7928 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 19:55:24.231461    7928 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-02 19:55:21.126714000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0602 19:55:24.231612    7928 machine.go:91] provisioned docker machine in 11.9708685s
	I0602 19:55:24.231612    7928 client.go:171] LocalClient.Create took 1m10.9788229s
	I0602 19:55:24.231685    7928 start.go:173] duration metric: libmachine.API.Create for "false-20220602191600-12108" took 1m10.9790147s
	I0602 19:55:24.231783    7928 start.go:306] post-start starting for "false-20220602191600-12108" (driver="docker")
	I0602 19:55:24.231783    7928 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 19:55:24.250814    7928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 19:55:24.262381    7928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108
	I0602 19:55:25.410960    7928 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108: (1.1484108s)
	I0602 19:55:25.411025    7928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55306 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\false-20220602191600-12108\id_rsa Username:docker}
	I0602 19:55:25.574674    7928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.323706s)
	I0602 19:55:25.586196    7928 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 19:55:25.604573    7928 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 19:55:25.604573    7928 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 19:55:25.604573    7928 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 19:55:25.606105    7928 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 19:55:25.606105    7928 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0602 19:55:25.606565    7928 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0602 19:55:25.607420    7928 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\121082.pem -> 121082.pem in /etc/ssl/certs
	I0602 19:55:25.626410    7928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 19:55:25.660005    7928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\121082.pem --> /etc/ssl/certs/121082.pem (1708 bytes)
	I0602 19:55:25.734932    7928 start.go:309] post-start completed in 1.5031415s
	I0602 19:55:25.751923    7928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20220602191600-12108
	I0602 19:55:26.903944    7928 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20220602191600-12108: (1.1519376s)
	I0602 19:55:26.904149    7928 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\config.json ...
	I0602 19:55:26.916670    7928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 19:55:26.925220    7928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108
	I0602 19:55:28.062063    7928 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108: (1.1366775s)
	I0602 19:55:28.062525    7928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55306 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\false-20220602191600-12108\id_rsa Username:docker}
	I0602 19:55:28.155687    7928 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.2390115s)
	I0602 19:55:28.168486    7928 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 19:55:28.189843    7928 start.go:134] duration metric: createHost completed in 1m14.9445943s
	I0602 19:55:28.205214    7928 cli_runner.go:164] Run: docker container inspect false-20220602191600-12108 --format={{.State.Status}}
	I0602 19:55:29.340051    7928 cli_runner.go:217] Completed: docker container inspect false-20220602191600-12108 --format={{.State.Status}}: (1.1347673s)
	W0602 19:55:29.340180    7928 fix.go:129] unexpected machine state, will restart: <nil>
	I0602 19:55:29.340262    7928 machine.go:88] provisioning docker machine ...
	I0602 19:55:29.340351    7928 ubuntu.go:169] provisioning hostname "false-20220602191600-12108"
	I0602 19:55:29.347841    7928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108
	I0602 19:55:30.479093    7928 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108: (1.1311972s)
	I0602 19:55:30.481128    7928 main.go:134] libmachine: Using SSH client type: native
	I0602 19:55:30.481128    7928 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 55306 <nil> <nil>}
	I0602 19:55:30.481128    7928 main.go:134] libmachine: About to run SSH command:
	sudo hostname false-20220602191600-12108 && echo "false-20220602191600-12108" | sudo tee /etc/hostname
	I0602 19:55:30.750012    7928 main.go:134] libmachine: SSH cmd err, output: <nil>: false-20220602191600-12108
	
	I0602 19:55:30.759351    7928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108
	I0602 19:55:31.848754    7928 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108: (1.0892009s)
	I0602 19:55:31.858909    7928 main.go:134] libmachine: Using SSH client type: native
	I0602 19:55:31.859773    7928 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 55306 <nil> <nil>}
	I0602 19:55:31.859871    7928 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfalse-20220602191600-12108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 false-20220602191600-12108/g' /etc/hosts;
				else 
					echo '127.0.1.1 false-20220602191600-12108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 19:55:32.315026    7928 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 19:55:32.315156    7928 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0602 19:55:32.315201    7928 ubuntu.go:177] setting up certificates
	I0602 19:55:32.315328    7928 provision.go:83] configureAuth start
	I0602 19:55:32.328363    7928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20220602191600-12108
	I0602 19:55:33.650760    7928 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20220602191600-12108: (1.3222779s)
	I0602 19:55:33.650977    7928 provision.go:138] copyHostCerts
	I0602 19:55:33.651703    7928 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0602 19:55:33.651763    7928 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0602 19:55:33.652252    7928 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0602 19:55:33.653588    7928 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0602 19:55:33.653588    7928 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0602 19:55:33.654053    7928 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0602 19:55:33.655529    7928 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0602 19:55:33.655662    7928 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0602 19:55:33.656212    7928 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1675 bytes)
	I0602 19:55:33.657808    7928 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.false-20220602191600-12108 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube false-20220602191600-12108]
	I0602 19:55:33.784457    7928 provision.go:172] copyRemoteCerts
	I0602 19:55:33.794866    7928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 19:55:33.796623    7928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108
	I0602 19:55:35.109588    7928 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108: (1.3128155s)
	I0602 19:55:35.109927    7928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55306 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\false-20220602191600-12108\id_rsa Username:docker}
	I0602 19:55:35.283628    7928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.4887561s)
	I0602 19:55:35.284082    7928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0602 19:55:35.338082    7928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0602 19:55:35.392812    7928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0602 19:55:35.454059    7928 provision.go:86] duration metric: configureAuth took 3.1386745s
	I0602 19:55:35.454059    7928 ubuntu.go:193] setting minikube options for container-runtime
	I0602 19:55:35.455005    7928 config.go:178] Loaded profile config "false-20220602191600-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:55:35.466653    7928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108
	I0602 19:55:36.657560    7928 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108: (1.1907117s)
	I0602 19:55:36.662258    7928 main.go:134] libmachine: Using SSH client type: native
	I0602 19:55:36.662869    7928 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 55306 <nil> <nil>}
	I0602 19:55:36.662869    7928 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 19:55:36.827498    7928 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 19:55:36.827498    7928 ubuntu.go:71] root file system type: overlay
	I0602 19:55:36.828994    7928 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 19:55:36.839613    7928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108
	I0602 19:55:38.041630    7928 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108: (1.2019476s)
	I0602 19:55:38.046532    7928 main.go:134] libmachine: Using SSH client type: native
	I0602 19:55:38.046532    7928 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 55306 <nil> <nil>}
	I0602 19:55:38.047075    7928 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 19:55:38.288792    7928 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 19:55:38.300822    7928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108
	I0602 19:55:39.514542    7928 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108: (1.2136509s)
	I0602 19:55:39.518106    7928 main.go:134] libmachine: Using SSH client type: native
	I0602 19:55:39.518900    7928 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 55306 <nil> <nil>}
	I0602 19:55:39.518900    7928 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 19:55:39.764500    7928 main.go:134] libmachine: SSH cmd err, output: <nil>: 
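
Note the empty output here, compared with the full diff on the first provisioning pass at 19:55:24: diff -u exits 0 when the installed unit already matches the rendered one, so the mv / daemon-reload / enable / restart branch after || is skipped and docker is not restarted a second time. A one-function Go sketch of building that compare-and-swap command:

    package main

    import "fmt"

    // updateUnitCmd builds the idempotent update used above: only when the
    // rendered unit differs from the installed one does the OR-branch swap
    // the file in and restart the service.
    func updateUnitCmd(unit string) string {
        return fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
                "sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
                "sudo systemctl -f restart docker; }",
            unit)
    }

    func main() {
        fmt.Println(updateUnitCmd("/lib/systemd/system/docker.service"))
    }
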
	I0602 19:55:39.764538    7928 machine.go:91] provisioned docker machine in 10.4242303s
	I0602 19:55:39.764538    7928 start.go:306] post-start starting for "false-20220602191600-12108" (driver="docker")
	I0602 19:55:39.764617    7928 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 19:55:39.776728    7928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 19:55:39.785954    7928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108
	I0602 19:55:40.939835    7928 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108: (1.1537335s)
	I0602 19:55:40.940266    7928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55306 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\false-20220602191600-12108\id_rsa Username:docker}
	I0602 19:55:41.115629    7928 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.3388954s)
	I0602 19:55:41.131738    7928 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 19:55:41.146600    7928 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 19:55:41.146600    7928 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 19:55:41.146600    7928 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 19:55:41.146600    7928 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 19:55:41.146600    7928 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0602 19:55:41.147178    7928 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0602 19:55:41.148079    7928 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\121082.pem -> 121082.pem in /etc/ssl/certs
	I0602 19:55:41.172409    7928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 19:55:41.211492    7928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\121082.pem --> /etc/ssl/certs/121082.pem (1708 bytes)
	I0602 19:55:41.288585    7928 start.go:309] post-start completed in 1.5238948s
	I0602 19:55:41.301824    7928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 19:55:41.308874    7928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108
	I0602 19:55:42.450883    7928 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108: (1.1418851s)
	I0602 19:55:42.451218    7928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55306 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\false-20220602191600-12108\id_rsa Username:docker}
	I0602 19:55:42.599358    7928 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.2975278s)
	I0602 19:55:42.615995    7928 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 19:55:42.638620    7928 fix.go:57] fixHost completed within 6m15.8114849s
	I0602 19:55:42.638620    7928 start.go:81] releasing machines lock for "false-20220602191600-12108", held for 6m15.8114849s
	I0602 19:55:42.652167    7928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20220602191600-12108
	I0602 19:55:43.789153    7928 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20220602191600-12108: (1.1369816s)
	I0602 19:55:43.792168    7928 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 19:55:43.801152    7928 ssh_runner.go:195] Run: sudo service containerd status
	I0602 19:55:43.804152    7928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108
	I0602 19:55:43.810140    7928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108
	I0602 19:55:45.030992    7928 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108: (1.2205997s)
	I0602 19:55:45.031551    7928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55306 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\false-20220602191600-12108\id_rsa Username:docker}
	I0602 19:55:45.061500    7928 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20220602191600-12108: (1.2573418s)
	I0602 19:55:45.061500    7928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55306 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\false-20220602191600-12108\id_rsa Username:docker}
	I0602 19:55:45.314927    7928 ssh_runner.go:235] Completed: sudo service containerd status: (1.5136957s)
	I0602 19:55:45.334034    7928 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 19:55:45.386779    7928 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.5946039s)
	I0602 19:55:45.405876    7928 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 19:55:45.418984    7928 ssh_runner.go:195] Run: sudo service crio status
	I0602 19:55:45.482302    7928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 19:55:45.535831    7928 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 19:55:45.594145    7928 ssh_runner.go:195] Run: sudo service docker status
	I0602 19:55:45.646029    7928 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 19:55:45.756722    7928 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 19:55:45.929551    7928 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0602 19:55:45.938629    7928 cli_runner.go:164] Run: docker exec -t false-20220602191600-12108 dig +short host.docker.internal
	I0602 19:55:47.358832    7928 cli_runner.go:217] Completed: docker exec -t false-20220602191600-12108 dig +short host.docker.internal: (1.4201534s)
	I0602 19:55:47.359022    7928 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0602 19:55:47.377882    7928 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0602 19:55:47.403242    7928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
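
The bash one-liner above keeps the /etc/hosts update idempotent: strip any stale host.minikube.internal line, append the address just dug out of the container's DNS, write to a temp file, then copy it back with sudo. The same filtering logic as a short Go sketch:

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostsEntry drops any existing line for name, then appends the
    // current mapping, so repeated provisioning leaves exactly one entry.
    func ensureHostsEntry(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(hosts, "\n") {
            if !strings.HasSuffix(line, "\t"+name) { // matches grep -v $'\t<name>$'
                kept = append(kept, line)
            }
        }
        return strings.Join(append(kept, ip+"\t"+name), "\n")
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n192.168.65.1\thost.minikube.internal"
        fmt.Println(ensureHostsEntry(hosts, "192.168.65.2", "host.minikube.internal"))
    }
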
	I0602 19:55:47.468172    7928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" false-20220602191600-12108
	I0602 19:55:48.618327    7928 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" false-20220602191600-12108: (1.1498719s)
	I0602 19:55:48.618910    7928 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 19:55:48.627777    7928 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 19:55:48.710159    7928 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0602 19:55:48.710159    7928 docker.go:541] Images already preloaded, skipping extraction
	I0602 19:55:48.720800    7928 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 19:55:48.852677    7928 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0602 19:55:48.852769    7928 cache_images.go:84] Images are preloaded, skipping loading
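
"Images are preloaded, skipping loading" is the result of comparing the docker images listing against the set required for v1.23.6: loading is skipped only if every expected image is already present. A minimal sketch of that set check (the lists below are truncated examples, not the full required set):

    package main

    import "fmt"

    // imagesPreloaded reports whether every required image already shows up
    // in the `docker images --format {{.Repository}}:{{.Tag}}` output.
    func imagesPreloaded(have, want []string) bool {
        got := make(map[string]bool, len(have))
        for _, img := range have {
            got[img] = true
        }
        for _, img := range want {
            if !got[img] {
                return false
            }
        }
        return true
    }

    func main() {
        have := []string{"k8s.gcr.io/kube-apiserver:v1.23.6", "k8s.gcr.io/pause:3.6"}
        want := []string{"k8s.gcr.io/pause:3.6"}
        fmt.Println(imagesPreloaded(have, want)) // true: skip loading
    }
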
	I0602 19:55:48.865863    7928 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 19:55:49.093824    7928 cni.go:95] Creating CNI manager for "false"
	I0602 19:55:49.093824    7928 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 19:55:49.093824    7928 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:false-20220602191600-12108 NodeName:false-20220602191600-12108 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 19:55:49.094549    7928 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "false-20220602191600-12108"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0602 19:55:49.094549    7928 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=false-20220602191600-12108 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:false-20220602191600-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:}
	I0602 19:55:49.110965    7928 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0602 19:55:49.143508    7928 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 19:55:49.156582    7928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /var/lib/minikube /etc/init.d
	I0602 19:55:49.183287    7928 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (352 bytes)
	I0602 19:55:49.226767    7928 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 19:55:49.276578    7928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I0602 19:55:49.321910    7928 ssh_runner.go:362] scp memory --> /var/lib/minikube/openrc-restart-wrapper.sh (233 bytes)
	I0602 19:55:49.369815    7928 ssh_runner.go:362] scp memory --> /etc/init.d/kubelet (839 bytes)
	I0602 19:55:49.429370    7928 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0602 19:55:49.442812    7928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 19:55:49.472813    7928 certs.go:54] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108 for IP: 192.168.58.2
	I0602 19:55:49.472813    7928 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0602 19:55:49.473701    7928 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0602 19:55:49.474293    7928 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\client.key
	I0602 19:55:49.474543    7928 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\client.crt with IP's: []
	I0602 19:55:49.667116    7928 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\client.crt ...
	I0602 19:55:49.667116    7928 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\client.crt: {Name:mk55edf9033ff91201c38dfc7ea1494fa1aed512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:55:49.677409    7928 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\client.key ...
	I0602 19:55:49.677409    7928 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\client.key: {Name:mk0c82afa81e6e17dfa28e10ebbb2ceddbf7361d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:55:49.678250    7928 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\apiserver.key.cee25041
	I0602 19:55:49.678250    7928 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0602 19:55:49.987685    7928 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\apiserver.crt.cee25041 ...
	I0602 19:55:49.987685    7928 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\apiserver.crt.cee25041: {Name:mk6384adab5439ce1cff0ffcb1cc8ade2542efa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:55:49.989650    7928 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\apiserver.key.cee25041 ...
	I0602 19:55:49.989650    7928 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\apiserver.key.cee25041: {Name:mk3bb8a2f180ba2f2383888d92e97ee18e57a724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:55:49.990784    7928 certs.go:320] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\apiserver.crt.cee25041 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\apiserver.crt
	I0602 19:55:49.998068    7928 certs.go:324] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\apiserver.key.cee25041 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\apiserver.key
	I0602 19:55:50.000697    7928 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\proxy-client.key
	I0602 19:55:50.000924    7928 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\proxy-client.crt with IP's: []
	I0602 19:55:50.490149    7928 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\proxy-client.crt ...
	I0602 19:55:50.490149    7928 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\proxy-client.crt: {Name:mk35271008c18f76e7258b880fc377a90b77265b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:55:50.494224    7928 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\proxy-client.key ...
	I0602 19:55:50.494224    7928 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\proxy-client.key: {Name:mk8871960fa9da3aed7cf3850ae02916a1a09b05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:55:50.500561    7928 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\12108.pem (1338 bytes)
	W0602 19:55:50.503300    7928 certs.go:384] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\12108_empty.pem, impossibly tiny 0 bytes
	I0602 19:55:50.503541    7928 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0602 19:55:50.503678    7928 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0602 19:55:50.503949    7928 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0602 19:55:50.504216    7928 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0602 19:55:50.504502    7928 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\121082.pem (1708 bytes)
	I0602 19:55:50.505581    7928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 19:55:50.592821    7928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0602 19:55:50.665533    7928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 19:55:50.719904    7928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\false-20220602191600-12108\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0602 19:55:50.777794    7928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 19:55:50.828451    7928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0602 19:55:50.893622    7928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 19:55:50.946463    7928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0602 19:55:51.004786    7928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 19:55:51.072185    7928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\12108.pem --> /usr/share/ca-certificates/12108.pem (1338 bytes)
	I0602 19:55:51.178488    7928 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\121082.pem --> /usr/share/ca-certificates/121082.pem (1708 bytes)
	I0602 19:55:51.248324    7928 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 19:55:51.304759    7928 ssh_runner.go:195] Run: openssl version
	I0602 19:55:51.339346    7928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 19:55:51.384923    7928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 19:55:51.404561    7928 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:16 /usr/share/ca-certificates/minikubeCA.pem
	I0602 19:55:51.415334    7928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 19:55:51.448217    7928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0602 19:55:51.494326    7928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12108.pem && ln -fs /usr/share/ca-certificates/12108.pem /etc/ssl/certs/12108.pem"
	I0602 19:55:51.533093    7928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12108.pem
	I0602 19:55:51.547688    7928 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:28 /usr/share/ca-certificates/12108.pem
	I0602 19:55:51.560031    7928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12108.pem
	I0602 19:55:51.593862    7928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12108.pem /etc/ssl/certs/51391683.0"
	I0602 19:55:51.659522    7928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/121082.pem && ln -fs /usr/share/ca-certificates/121082.pem /etc/ssl/certs/121082.pem"
	I0602 19:55:51.717640    7928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/121082.pem
	I0602 19:55:51.736132    7928 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:28 /usr/share/ca-certificates/121082.pem
	I0602 19:55:51.747825    7928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/121082.pem
	I0602 19:55:51.782011    7928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/121082.pem /etc/ssl/certs/3ec20f2e.0"
	I0602 19:55:51.819676    7928 kubeadm.go:395] StartCluster: {Name:false-20220602191600-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:false-20220602191600-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 19:55:51.831113    7928 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 19:55:51.932310    7928 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 19:55:51.975994    7928 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 19:55:52.001798    7928 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 19:55:52.013424    7928 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 19:55:52.044901    7928 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 19:55:52.044901    7928 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"

** /stderr **
net_test.go:103: failed start: exit status 1
--- FAIL: TestNetworkPlugins/group/false/Start (462.98s)
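A hand-run triage for a start failure like this one, sketched under the assumption that the false-20220602191600-12108 node container is still running, is to read the kubelet journal and the generated kubeadm config inside that container:

	docker exec -t false-20220602191600-12108 sudo journalctl -u kubelet --no-pager | tail -n 50
	docker exec -t false-20220602191600-12108 sudo cat /var/tmp/minikube/kubeadm.yaml

Both the container name and the /var/tmp/minikube/kubeadm.yaml path come from the log above; whether journalctl is present depends on the kicbase image, so treat this as a sketch rather than a verified procedure.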

TestNetworkPlugins/group/auto/DNS (356.58s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5505608s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context auto-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5280785s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context auto-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default
E0602 19:50:47.524999   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220602192234-12108\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5620431s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context auto-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default
E0602 19:51:04.375700   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220602192231-12108\client.crt: The system cannot find the path specified.

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5105715s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.664934s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context auto-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.594241s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context auto-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default
E0602 19:51:57.287419   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5880523s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context auto-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6245155s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0602 19:52:41.008373   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Run:  kubectl --context auto-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6124888s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6356212s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6425536s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5889132s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/auto/DNS (356.58s)
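The failing probe is a plain nslookup from the test's netcat deployment. A minimal manual re-check, assuming the auto-20220602191545-12108 context still exists, is:

	kubectl --context auto-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-20220602191545-12108 -n kube-system get pods -l k8s-app=kube-dns

The first command is exactly what net_test.go:169 runs and should print 10.96.0.1 on a healthy cluster; the second (k8s-app=kube-dns is the stock CoreDNS selector, an assumption here) shows whether CoreDNS is running at all when the lookup times out.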

TestNetworkPlugins/group/kubenet/Start (85.33s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-20220602191545-12108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubenet-20220602191545-12108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker: exit status 1 (1m25.3213441s)

-- stdout --
	* [kubenet-20220602191545-12108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14269
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node kubenet-20220602191545-12108 in cluster kubenet-20220602191545-12108
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

-- /stdout --
** stderr ** 
	I0602 19:54:21.003324   14284 out.go:296] Setting OutFile to fd 1436 ...
	I0602 19:54:21.082292   14284 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 19:54:21.082292   14284 out.go:309] Setting ErrFile to fd 1844...
	I0602 19:54:21.082292   14284 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 19:54:21.099606   14284 out.go:303] Setting JSON to false
	I0602 19:54:21.102846   14284 start.go:115] hostinfo: {"hostname":"minikube7","uptime":62803,"bootTime":1654136858,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0602 19:54:21.102846   14284 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 19:54:21.108108   14284 out.go:177] * [kubenet-20220602191545-12108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0602 19:54:21.112881   14284 notify.go:193] Checking for updates...
	I0602 19:54:21.115555   14284 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0602 19:54:21.117627   14284 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0602 19:54:21.120297   14284 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 19:54:21.123451   14284 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 19:54:21.126107   14284 config.go:178] Loaded profile config "auto-20220602191545-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:54:21.126885   14284 config.go:178] Loaded profile config "enable-default-cni-20220602191545-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:54:21.126885   14284 config.go:178] Loaded profile config "false-20220602191600-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:54:21.127710   14284 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 19:54:24.242032   14284 docker.go:137] docker version: linux-20.10.16
	I0602 19:54:24.258019   14284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:54:26.789067   14284 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.5310362s)
	I0602 19:54:26.789868   14284 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:74 OomKillDisable:true NGoroutines:66 SystemTime:2022-06-02 19:54:25.519426 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:54:26.792611   14284 out.go:177] * Using the docker driver based on user configuration
	I0602 19:54:26.799643   14284 start.go:284] selected driver: docker
	I0602 19:54:26.799643   14284 start.go:806] validating driver "docker" against <nil>
	I0602 19:54:26.799643   14284 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 19:54:26.903202   14284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:54:29.207410   14284 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.3038993s)
	I0602 19:54:29.207729   14284 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:81 OomKillDisable:true NGoroutines:72 SystemTime:2022-06-02 19:54:28.0956076 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:54:29.208066   14284 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0602 19:54:29.208104   14284 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0602 19:54:29.404711   14284 out.go:177] * Using Docker Desktop driver with the root privilege
	I0602 19:54:29.513559   14284 cni.go:91] network plugin configured as "kubenet", returning disabled
	I0602 19:54:29.513559   14284 start_flags.go:306] config:
	{Name:kubenet-20220602191545-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kubenet-20220602191545-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 19:54:29.556236   14284 out.go:177] * Starting control plane node kubenet-20220602191545-12108 in cluster kubenet-20220602191545-12108
	I0602 19:54:29.562375   14284 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 19:54:29.566596   14284 out.go:177] * Pulling base image ...
	I0602 19:54:29.572179   14284 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 19:54:29.571650   14284 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 19:54:29.572224   14284 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 19:54:29.572224   14284 cache.go:57] Caching tarball of preloaded images
	I0602 19:54:29.572773   14284 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 19:54:29.572864   14284 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0602 19:54:29.572864   14284 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-20220602191545-12108\config.json ...
	I0602 19:54:29.572864   14284 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubenet-20220602191545-12108\config.json: {Name:mkfff16e2b0be76c78609e1ecf78b515ff529aa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:54:30.797392   14284 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 19:54:30.797556   14284 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 19:54:30.797591   14284 cache.go:206] Successfully downloaded all kic artifacts
	I0602 19:54:30.797694   14284 start.go:352] acquiring machines lock for kubenet-20220602191545-12108: {Name:mkc077a8171ba9b8e51de76e3db97bc90af3e106 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 19:54:30.797694   14284 start.go:356] acquired machines lock for "kubenet-20220602191545-12108" in 0s
	I0602 19:54:30.797694   14284 start.go:91] Provisioning new machine with config: &{Name:kubenet-20220602191545-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kubenet-20220602191545-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 19:54:30.801309   14284 start.go:131] createHost starting for "" (driver="docker")
	I0602 19:54:30.806296   14284 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0602 19:54:30.806966   14284 start.go:165] libmachine.API.Create for "kubenet-20220602191545-12108" (driver="docker")
	I0602 19:54:30.806966   14284 client.go:168] LocalClient.Create starting
	I0602 19:54:30.806966   14284 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0602 19:54:30.806966   14284 main.go:134] libmachine: Decoding PEM data...
	I0602 19:54:30.806966   14284 main.go:134] libmachine: Parsing certificate...
	I0602 19:54:30.808077   14284 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0602 19:54:30.808077   14284 main.go:134] libmachine: Decoding PEM data...
	I0602 19:54:30.808077   14284 main.go:134] libmachine: Parsing certificate...
	I0602 19:54:30.818250   14284 cli_runner.go:164] Run: docker network inspect kubenet-20220602191545-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0602 19:54:32.028812   14284 cli_runner.go:211] docker network inspect kubenet-20220602191545-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0602 19:54:32.028812   14284 cli_runner.go:217] Completed: docker network inspect kubenet-20220602191545-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2102037s)
	I0602 19:54:32.037707   14284 network_create.go:272] running [docker network inspect kubenet-20220602191545-12108] to gather additional debugging logs...
	I0602 19:54:32.037707   14284 cli_runner.go:164] Run: docker network inspect kubenet-20220602191545-12108
	W0602 19:54:33.241038   14284 cli_runner.go:211] docker network inspect kubenet-20220602191545-12108 returned with exit code 1
	I0602 19:54:33.241038   14284 cli_runner.go:217] Completed: docker network inspect kubenet-20220602191545-12108: (1.2032744s)
	I0602 19:54:33.241038   14284 network_create.go:275] error running [docker network inspect kubenet-20220602191545-12108]: docker network inspect kubenet-20220602191545-12108: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubenet-20220602191545-12108
	I0602 19:54:33.241038   14284 network_create.go:277] output of [docker network inspect kubenet-20220602191545-12108]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubenet-20220602191545-12108
	
	** /stderr **
	I0602 19:54:33.253913   14284 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 19:54:34.557732   14284 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.3038135s)
	I0602 19:54:34.588734   14284 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00058a9a0] misses:0}
	I0602 19:54:34.589050   14284 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 19:54:34.589050   14284 network_create.go:115] attempt to create docker network kubenet-20220602191545-12108 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0602 19:54:34.598183   14284 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220602191545-12108
	W0602 19:54:35.730074   14284 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220602191545-12108 returned with exit code 1
	I0602 19:54:35.730074   14284 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220602191545-12108: (1.1318867s)
	W0602 19:54:35.730074   14284 network_create.go:107] failed to create docker network kubenet-20220602191545-12108 192.168.49.0/24, will retry: subnet is taken
	I0602 19:54:35.750045   14284 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058a9a0] amended:false}} dirty:map[] misses:0}
	I0602 19:54:35.750713   14284 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 19:54:35.768521   14284 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058a9a0] amended:true}} dirty:map[192.168.49.0:0xc00058a9a0 192.168.58.0:0xc00058aa78] misses:0}
	I0602 19:54:35.768521   14284 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 19:54:35.768521   14284 network_create.go:115] attempt to create docker network kubenet-20220602191545-12108 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0602 19:54:35.780300   14284 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220602191545-12108
	W0602 19:54:36.908638   14284 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220602191545-12108 returned with exit code 1
	I0602 19:54:36.908638   14284 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220602191545-12108: (1.128011s)
	W0602 19:54:36.908713   14284 network_create.go:107] failed to create docker network kubenet-20220602191545-12108 192.168.58.0/24, will retry: subnet is taken
	I0602 19:54:36.926493   14284 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058a9a0] amended:true}} dirty:map[192.168.49.0:0xc00058a9a0 192.168.58.0:0xc00058aa78] misses:1}
	I0602 19:54:36.926493   14284 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 19:54:36.948891   14284 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00058a9a0] amended:true}} dirty:map[192.168.49.0:0xc00058a9a0 192.168.58.0:0xc00058aa78 192.168.67.0:0xc0002383f8] misses:1}
	I0602 19:54:36.949045   14284 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 19:54:36.949045   14284 network_create.go:115] attempt to create docker network kubenet-20220602191545-12108 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0602 19:54:36.955288   14284 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220602191545-12108
	I0602 19:54:39.400621   14284 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220602191545-12108: (2.4453225s)
	I0602 19:54:39.400621   14284 network_create.go:99] docker network kubenet-20220602191545-12108 192.168.67.0/24 created
	I0602 19:54:39.400621   14284 kic.go:106] calculated static IP "192.168.67.2" for the "kubenet-20220602191545-12108" container
	I0602 19:54:39.417995   14284 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0602 19:54:40.674394   14284 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.2563939s)
	I0602 19:54:40.696923   14284 cli_runner.go:164] Run: docker volume create kubenet-20220602191545-12108 --label name.minikube.sigs.k8s.io=kubenet-20220602191545-12108 --label created_by.minikube.sigs.k8s.io=true
	I0602 19:54:42.307654   14284 cli_runner.go:217] Completed: docker volume create kubenet-20220602191545-12108 --label name.minikube.sigs.k8s.io=kubenet-20220602191545-12108 --label created_by.minikube.sigs.k8s.io=true: (1.6105158s)
	I0602 19:54:42.307988   14284 oci.go:103] Successfully created a docker volume kubenet-20220602191545-12108
	I0602 19:54:42.322832   14284 cli_runner.go:164] Run: docker run --rm --name kubenet-20220602191545-12108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20220602191545-12108 --entrypoint /usr/bin/test -v kubenet-20220602191545-12108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib
	I0602 19:54:53.374968   14284 cli_runner.go:217] Completed: docker run --rm --name kubenet-20220602191545-12108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20220602191545-12108 --entrypoint /usr/bin/test -v kubenet-20220602191545-12108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib: (11.0516599s)
	I0602 19:54:53.375090   14284 oci.go:107] Successfully prepared a docker volume kubenet-20220602191545-12108
	I0602 19:54:53.375136   14284 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 19:54:53.375195   14284 kic.go:179] Starting extracting preloaded images to volume ...
	I0602 19:54:53.384406   14284 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20220602191545-12108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir
	I0602 19:55:21.297624   14284 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20220602191545-12108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir: (27.9130219s)
	I0602 19:55:21.297903   14284 kic.go:188] duration metric: took 27.922585 seconds to extract preloaded images to volume
	I0602 19:55:21.308950   14284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:55:23.542707   14284 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2333773s)
	I0602 19:55:23.542795   14284 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:80 OomKillDisable:true NGoroutines:60 SystemTime:2022-06-02 19:55:22.3904963 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:55:23.551928   14284 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0602 19:55:25.766208   14284 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.2140445s)
	I0602 19:55:25.774515   14284 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-20220602191545-12108 --name kubenet-20220602191545-12108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20220602191545-12108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-20220602191545-12108 --network kubenet-20220602191545-12108 --ip 192.168.67.2 --volume kubenet-20220602191545-12108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496
	I0602 19:55:33.322126   14284 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-20220602191545-12108 --name kubenet-20220602191545-12108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20220602191545-12108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-20220602191545-12108 --network kubenet-20220602191545-12108 --ip 192.168.67.2 --volume kubenet-20220602191545-12108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496: (7.5473552s)
	I0602 19:55:33.345666   14284 cli_runner.go:164] Run: docker container inspect kubenet-20220602191545-12108 --format={{.State.Running}}
	I0602 19:55:34.658558   14284 cli_runner.go:217] Completed: docker container inspect kubenet-20220602191545-12108 --format={{.State.Running}}: (1.3128865s)
	I0602 19:55:34.667853   14284 cli_runner.go:164] Run: docker container inspect kubenet-20220602191545-12108 --format={{.State.Status}}
	I0602 19:55:35.913886   14284 cli_runner.go:217] Completed: docker container inspect kubenet-20220602191545-12108 --format={{.State.Status}}: (1.2457893s)
	I0602 19:55:35.922321   14284 cli_runner.go:164] Run: docker exec kubenet-20220602191545-12108 stat /var/lib/dpkg/alternatives/iptables
	I0602 19:55:37.361383   14284 cli_runner.go:217] Completed: docker exec kubenet-20220602191545-12108 stat /var/lib/dpkg/alternatives/iptables: (1.4390012s)
	I0602 19:55:37.361415   14284 oci.go:247] the created container "kubenet-20220602191545-12108" has a running status.
	I0602 19:55:37.361480   14284 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-20220602191545-12108\id_rsa...
	I0602 19:55:37.524732   14284 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-20220602191545-12108\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0602 19:55:38.848938   14284 cli_runner.go:164] Run: docker container inspect kubenet-20220602191545-12108 --format={{.State.Status}}
	I0602 19:55:40.055340   14284 cli_runner.go:217] Completed: docker container inspect kubenet-20220602191545-12108 --format={{.State.Status}}: (1.2062997s)
	I0602 19:55:40.074726   14284 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0602 19:55:40.074726   14284 kic_runner.go:114] Args: [docker exec --privileged kubenet-20220602191545-12108 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0602 19:55:41.451764   14284 kic_runner.go:123] Done: [docker exec --privileged kubenet-20220602191545-12108 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.3770328s)
	I0602 19:55:41.457463   14284 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\kubenet-20220602191545-12108\id_rsa...
	I0602 19:55:41.950585   14284 cli_runner.go:164] Run: docker container inspect kubenet-20220602191545-12108 --format={{.State.Status}}
	I0602 19:55:43.097725   14284 cli_runner.go:217] Completed: docker container inspect kubenet-20220602191545-12108 --format={{.State.Status}}: (1.1470924s)
	I0602 19:55:43.097844   14284 machine.go:88] provisioning docker machine ...
	I0602 19:55:43.097844   14284 ubuntu.go:169] provisioning hostname "kubenet-20220602191545-12108"
	I0602 19:55:43.104723   14284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220602191545-12108
	I0602 19:55:44.308104   14284 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220602191545-12108: (1.2033752s)
	I0602 19:55:44.314743   14284 main.go:134] libmachine: Using SSH client type: native
	I0602 19:55:44.321228   14284 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 55335 <nil> <nil>}
	I0602 19:55:44.321228   14284 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubenet-20220602191545-12108 && echo "kubenet-20220602191545-12108" | sudo tee /etc/hostname
	I0602 19:55:44.580916   14284 main.go:134] libmachine: SSH cmd err, output: <nil>: kubenet-20220602191545-12108
	
	I0602 19:55:44.591624   14284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220602191545-12108
	I0602 19:55:45.906808   14284 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220602191545-12108: (1.315082s)
	I0602 19:55:45.912031   14284 main.go:134] libmachine: Using SSH client type: native
	I0602 19:55:45.912031   14284 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x5b2ea0] 0x5b5d00 <nil>  [] 0s} 127.0.0.1 55335 <nil> <nil>}
	I0602 19:55:45.912031   14284 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-20220602191545-12108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-20220602191545-12108/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-20220602191545-12108' | sudo tee -a /etc/hosts; 
				fi
			fi

** /stderr **
net_test.go:103: failed start: exit status 1
--- FAIL: TestNetworkPlugins/group/kubenet/Start (85.33s)
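
Two things are worth noting in the stderr above. The failure itself is not explained by the excerpt: the network is eventually created, the container comes up, and the log stops partway through hostname provisioning with no error recorded, so the root cause of the exit status 1 is not visible here. What the log does show completely is minikube's subnet-retry loop: 192.168.49.0/24 and 192.168.58.0/24 are rejected as taken (other profiles were starting in parallel) before 192.168.67.0/24 succeeds. The Go sketch below reproduces only that retry pattern; it is a minimal illustration, not minikube's network_create.go, the helper names (candidateSubnets, tryCreateNetwork) are invented, and the +9 step in the third octet is inferred from the 49 -> 58 -> 67 progression logged above.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// candidateSubnets mirrors the progression seen in the log:
	// 192.168.49.0/24 -> 192.168.58.0/24 -> 192.168.67.0/24.
	func candidateSubnets() []string {
		var subnets []string
		for octet := 49; octet <= 67; octet += 9 {
			subnets = append(subnets, fmt.Sprintf("192.168.%d.0/24", octet))
		}
		return subnets
	}

	// tryCreateNetwork invokes docker with the same flags as the log above.
	func tryCreateNetwork(name, subnet, gateway string) error {
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true", name).CombinedOutput()
		if err != nil {
			return fmt.Errorf("create %s on %s: %v: %s", name, subnet, err, out)
		}
		return nil
	}

	func main() {
		for _, subnet := range candidateSubnets() {
			gateway := subnet[:len(subnet)-4] + "1" // the .1 address of the /24
			if err := tryCreateNetwork("kubenet-demo", subnet, gateway); err != nil {
				fmt.Println("candidate failed, retrying:", err)
				continue
			}
			fmt.Println("created network on", subnet)
			return
		}
		fmt.Println("no free candidate subnet")
	}

Against a live Docker daemon this prints which candidate finally succeeded. minikube additionally holds a one-minute in-process reservation per attempted subnet (the "reserving subnet ... for 1m0s" lines above) so that parallel starts on the same host do not race for one CIDR.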

TestNetworkPlugins/group/kindnet/Start (20.56s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-20220602191600-12108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kindnet-20220602191600-12108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker: exit status 1 (20.5481746s)

-- stdout --
	* [kindnet-20220602191600-12108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14269
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node kindnet-20220602191600-12108 in cluster kindnet-20220602191600-12108
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

-- /stdout --
** stderr ** 
	I0602 19:55:40.045092   10972 out.go:296] Setting OutFile to fd 1952 ...
	I0602 19:55:40.113873   10972 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 19:55:40.113873   10972 out.go:309] Setting ErrFile to fd 1880...
	I0602 19:55:40.113944   10972 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 19:55:40.122994   10972 out.go:303] Setting JSON to false
	I0602 19:55:40.132905   10972 start.go:115] hostinfo: {"hostname":"minikube7","uptime":62882,"bootTime":1654136858,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0602 19:55:40.133036   10972 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 19:55:40.138133   10972 out.go:177] * [kindnet-20220602191600-12108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0602 19:55:40.141822   10972 notify.go:193] Checking for updates...
	I0602 19:55:40.144875   10972 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0602 19:55:40.147851   10972 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0602 19:55:40.148073   10972 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 19:55:40.154179   10972 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 19:55:40.159425   10972 config.go:178] Loaded profile config "auto-20220602191545-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:55:40.160066   10972 config.go:178] Loaded profile config "false-20220602191600-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:55:40.160961   10972 config.go:178] Loaded profile config "kubenet-20220602191545-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 19:55:40.161279   10972 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 19:55:42.955211   10972 docker.go:137] docker version: linux-20.10.16
	I0602 19:55:42.963950   10972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:55:45.247683   10972 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2836727s)
	I0602 19:55:45.248573   10972 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:89 OomKillDisable:true NGoroutines:65 SystemTime:2022-06-02 19:55:44.1046947 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:55:45.252294   10972 out.go:177] * Using the docker driver based on user configuration
	I0602 19:55:45.255393   10972 start.go:284] selected driver: docker
	I0602 19:55:45.255467   10972 start.go:806] validating driver "docker" against <nil>
	I0602 19:55:45.255542   10972 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 19:55:45.369058   10972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 19:55:47.696636   10972 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.3275681s)
	I0602 19:55:47.696636   10972 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:89 OomKillDisable:true NGoroutines:65 SystemTime:2022-06-02 19:55:46.516306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 19:55:47.697496   10972 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0602 19:55:47.698549   10972 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0602 19:55:47.701916   10972 out.go:177] * Using Docker Desktop driver with the root privilege
	I0602 19:55:47.704556   10972 cni.go:95] Creating CNI manager for "kindnet"
	I0602 19:55:47.704556   10972 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0602 19:55:47.704556   10972 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0602 19:55:47.704556   10972 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0602 19:55:47.704556   10972 start_flags.go:306] config:
	{Name:kindnet-20220602191600-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kindnet-20220602191600-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 19:55:47.708780   10972 out.go:177] * Starting control plane node kindnet-20220602191600-12108 in cluster kindnet-20220602191600-12108
	I0602 19:55:47.708780   10972 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 19:55:47.713271   10972 out.go:177] * Pulling base image ...
	I0602 19:55:47.715859   10972 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 19:55:47.716910   10972 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 19:55:47.717336   10972 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 19:55:47.717336   10972 cache.go:57] Caching tarball of preloaded images
	I0602 19:55:47.717404   10972 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 19:55:47.717945   10972 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0602 19:55:47.718175   10972 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-20220602191600-12108\config.json ...
	I0602 19:55:47.718515   10972 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-20220602191600-12108\config.json: {Name:mk3838b4f0fa369d463199a32b5aceecddceed37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 19:55:48.862667   10972 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 19:55:48.862667   10972 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 19:55:48.862667   10972 cache.go:206] Successfully downloaded all kic artifacts
	I0602 19:55:48.862667   10972 start.go:352] acquiring machines lock for kindnet-20220602191600-12108: {Name:mk5522a0bdc453abeb41f470c039a78ed1eb4b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 19:55:48.862667   10972 start.go:356] acquired machines lock for "kindnet-20220602191600-12108" in 0s
	I0602 19:55:48.863382   10972 start.go:91] Provisioning new machine with config: &{Name:kindnet-20220602191600-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kindnet-20220602191600-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 19:55:48.863382   10972 start.go:131] createHost starting for "" (driver="docker")
	I0602 19:55:48.867655   10972 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0602 19:55:48.867655   10972 start.go:165] libmachine.API.Create for "kindnet-20220602191600-12108" (driver="docker")
	I0602 19:55:48.871575   10972 client.go:168] LocalClient.Create starting
	I0602 19:55:48.871575   10972 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0602 19:55:48.874268   10972 main.go:134] libmachine: Decoding PEM data...
	I0602 19:55:48.874268   10972 main.go:134] libmachine: Parsing certificate...
	I0602 19:55:48.874268   10972 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0602 19:55:48.874894   10972 main.go:134] libmachine: Decoding PEM data...
	I0602 19:55:48.874894   10972 main.go:134] libmachine: Parsing certificate...
	I0602 19:55:48.892752   10972 cli_runner.go:164] Run: docker network inspect kindnet-20220602191600-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0602 19:55:50.028843   10972 cli_runner.go:211] docker network inspect kindnet-20220602191600-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0602 19:55:50.029045   10972 cli_runner.go:217] Completed: docker network inspect kindnet-20220602191600-12108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1360855s)
	I0602 19:55:50.037941   10972 network_create.go:272] running [docker network inspect kindnet-20220602191600-12108] to gather additional debugging logs...
	I0602 19:55:50.037941   10972 cli_runner.go:164] Run: docker network inspect kindnet-20220602191600-12108
	W0602 19:55:51.254897   10972 cli_runner.go:211] docker network inspect kindnet-20220602191600-12108 returned with exit code 1
	I0602 19:55:51.255143   10972 cli_runner.go:217] Completed: docker network inspect kindnet-20220602191600-12108: (1.2169508s)
	I0602 19:55:51.255207   10972 network_create.go:275] error running [docker network inspect kindnet-20220602191600-12108]: docker network inspect kindnet-20220602191600-12108: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220602191600-12108
	I0602 19:55:51.255207   10972 network_create.go:277] output of [docker network inspect kindnet-20220602191600-12108]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220602191600-12108
	
	** /stderr **
	I0602 19:55:51.263972   10972 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 19:55:52.479189   10972 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2144767s)
	I0602 19:55:52.497425   10972 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00024c2e8] misses:0}
	I0602 19:55:52.497425   10972 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 19:55:52.497425   10972 network_create.go:115] attempt to create docker network kindnet-20220602191600-12108 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0602 19:55:52.512573   10972 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220602191600-12108
	I0602 19:55:53.860640   10972 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220602191600-12108: (1.3480619s)
	I0602 19:55:53.860640   10972 network_create.go:99] docker network kindnet-20220602191600-12108 192.168.49.0/24 created
	I0602 19:55:53.860640   10972 kic.go:106] calculated static IP "192.168.49.2" for the "kindnet-20220602191600-12108" container
	I0602 19:55:53.882380   10972 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0602 19:55:55.076025   10972 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1933105s)
	I0602 19:55:55.085436   10972 cli_runner.go:164] Run: docker volume create kindnet-20220602191600-12108 --label name.minikube.sigs.k8s.io=kindnet-20220602191600-12108 --label created_by.minikube.sigs.k8s.io=true
	I0602 19:55:56.157168   10972 cli_runner.go:217] Completed: docker volume create kindnet-20220602191600-12108 --label name.minikube.sigs.k8s.io=kindnet-20220602191600-12108 --label created_by.minikube.sigs.k8s.io=true: (1.0715567s)
	I0602 19:55:56.157168   10972 oci.go:103] Successfully created a docker volume kindnet-20220602191600-12108
	I0602 19:55:56.164519   10972 cli_runner.go:164] Run: docker run --rm --name kindnet-20220602191600-12108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220602191600-12108 --entrypoint /usr/bin/test -v kindnet-20220602191600-12108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib
	I0602 19:55:59.307901   10972 cli_runner.go:217] Completed: docker run --rm --name kindnet-20220602191600-12108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220602191600-12108 --entrypoint /usr/bin/test -v kindnet-20220602191600-12108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib: (3.1433679s)
	I0602 19:55:59.308120   10972 oci.go:107] Successfully prepared a docker volume kindnet-20220602191600-12108
	I0602 19:55:59.308120   10972 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 19:55:59.308120   10972 kic.go:179] Starting extracting preloaded images to volume ...
	I0602 19:55:59.315166   10972 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220602191600-12108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir

** /stderr **
net_test.go:103: failed start: exit status 1
--- FAIL: TestNetworkPlugins/group/kindnet/Start (20.56s)
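
The kindnet start fails even faster, and again the excerpt ends without a recorded error: the last stderr line shows the preload tarball still being extracted into the volume. What this log does show cleanly is the inspect-then-create flow for networks: "docker network inspect kindnet-20220602191600-12108" runs first, and its exit status 1 with "Error: No such network" is captured for debugging but treated as the normal signal to create the network. Below is a minimal sketch of that distinction, assuming only the docker CLI behavior quoted in the log; networkExists is an invented helper, not a minikube function.

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	// networkExists wraps "docker network inspect", mapping the
	// "No such network" stderr seen in the log to a clean false.
	func networkExists(name string) (bool, error) {
		cmd := exec.Command("docker", "network", "inspect", name)
		var stderr bytes.Buffer
		cmd.Stderr = &stderr
		if err := cmd.Run(); err != nil {
			if bytes.Contains(stderr.Bytes(), []byte("No such network")) {
				return false, nil // expected before the first create
			}
			return false, fmt.Errorf("docker network inspect %s: %v: %s", name, err, stderr.String())
		}
		return true, nil
	}

	func main() {
		exists, err := networkExists("kindnet-20220602191600-12108")
		fmt.Println("exists:", exists, "err:", err)
	}

Separating "missing" (expected on a first start) from any other inspect failure keeps the create path from papering over real daemon errors.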


Test pass (221/257)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 18.11
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.75
10 TestDownloadOnly/v1.23.6/json-events 13.01
11 TestDownloadOnly/v1.23.6/preload-exists 0.02
14 TestDownloadOnly/v1.23.6/kubectl 0
15 TestDownloadOnly/v1.23.6/LogsDuration 0.63
16 TestDownloadOnly/DeleteAll 11.18
17 TestDownloadOnly/DeleteAlwaysSucceeds 6.92
18 TestDownloadOnlyKic 45.51
19 TestBinaryMirror 16.28
20 TestOffline 223
22 TestAddons/Setup 473.76
26 TestAddons/parallel/MetricsServer 14.52
27 TestAddons/parallel/HelmTiller 60.53
29 TestAddons/parallel/CSI 88.25
31 TestAddons/serial/GCPAuth 27.85
32 TestAddons/StoppedEnableDisable 24.45
33 TestCertOptions 167.03
34 TestCertExpiration 390.05
35 TestDockerFlags 168.7
36 TestForceSystemdFlag 190.31
37 TestForceSystemdEnv 181.37
42 TestErrorSpam/setup 113.54
43 TestErrorSpam/start 22.32
44 TestErrorSpam/status 19.72
45 TestErrorSpam/pause 17.22
46 TestErrorSpam/unpause 17.49
47 TestErrorSpam/stop 32.54
50 TestFunctional/serial/CopySyncFile 0.04
51 TestFunctional/serial/StartWithProxy 127.05
52 TestFunctional/serial/AuditLog 0
53 TestFunctional/serial/SoftStart 34.91
54 TestFunctional/serial/KubeContext 0.18
55 TestFunctional/serial/KubectlGetPods 0.36
58 TestFunctional/serial/CacheCmd/cache/add_remote 18.3
59 TestFunctional/serial/CacheCmd/cache/add_local 9.27
60 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.37
61 TestFunctional/serial/CacheCmd/cache/list 0.35
62 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 6.33
63 TestFunctional/serial/CacheCmd/cache/cache_reload 25.02
64 TestFunctional/serial/CacheCmd/cache/delete 0.74
65 TestFunctional/serial/MinikubeKubectlCmd 2.07
66 TestFunctional/serial/MinikubeKubectlCmdDirectly 2
67 TestFunctional/serial/ExtraConfig 75.96
68 TestFunctional/serial/ComponentHealth 0.26
69 TestFunctional/serial/LogsCmd 7.56
70 TestFunctional/serial/LogsFileCmd 8.53
72 TestFunctional/parallel/ConfigCmd 2.31
74 TestFunctional/parallel/DryRun 12.4
75 TestFunctional/parallel/InternationalLanguage 5.59
76 TestFunctional/parallel/StatusCmd 20.28
81 TestFunctional/parallel/AddonsCmd 3.68
82 TestFunctional/parallel/PersistentVolumeClaim 50.52
84 TestFunctional/parallel/SSHCmd 15.03
85 TestFunctional/parallel/CpCmd 27.67
86 TestFunctional/parallel/MySQL 82.22
87 TestFunctional/parallel/FileSync 6.82
88 TestFunctional/parallel/CertSync 41.52
92 TestFunctional/parallel/NodeLabels 0.25
94 TestFunctional/parallel/NonActiveRuntimeDisabled 6.72
96 TestFunctional/parallel/ProfileCmd/profile_not_create 11.5
97 TestFunctional/parallel/DockerEnv/powershell 29.85
99 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
101 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 16.73
102 TestFunctional/parallel/ProfileCmd/profile_list 7.28
103 TestFunctional/parallel/ProfileCmd/profile_json_output 8.01
105 TestFunctional/parallel/UpdateContextCmd/no_changes 4.07
106 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 4.13
107 TestFunctional/parallel/UpdateContextCmd/no_clusters 4.05
108 TestFunctional/parallel/ImageCommands/ImageListShort 4.36
109 TestFunctional/parallel/ImageCommands/ImageListTable 4.22
110 TestFunctional/parallel/ImageCommands/ImageListJson 4.23
111 TestFunctional/parallel/ImageCommands/ImageListYaml 4.36
112 TestFunctional/parallel/ImageCommands/ImageBuild 18.06
113 TestFunctional/parallel/ImageCommands/Setup 5.73
114 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 19.4
115 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 13.3
116 TestFunctional/parallel/Version/short 0.4
117 TestFunctional/parallel/Version/components 6.5
118 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 22.59
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
124 TestFunctional/parallel/ImageCommands/ImageSaveToFile 8.2
125 TestFunctional/parallel/ImageCommands/ImageRemove 8.58
126 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 12.58
127 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 12.38
128 TestFunctional/delete_addon-resizer_images 0.02
129 TestFunctional/delete_my-image_image 0.01
130 TestFunctional/delete_minikube_cached_images 0.01
133 TestIngressAddonLegacy/StartLegacyK8sCluster 133.93
135 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 49.89
136 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 4.67
140 TestJSONOutput/start/Command 128.29
141 TestJSONOutput/start/Audit 0
143 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
144 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
146 TestJSONOutput/pause/Command 6.11
147 TestJSONOutput/pause/Audit 0
149 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
150 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
152 TestJSONOutput/unpause/Command 5.75
153 TestJSONOutput/unpause/Audit 0
155 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
156 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
158 TestJSONOutput/stop/Command 17.87
159 TestJSONOutput/stop/Audit 0
161 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
162 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
163 TestErrorJSONOutput 7.24
165 TestKicCustomNetwork/create_custom_network 137.23
166 TestKicCustomNetwork/use_default_bridge_network 129.6
167 TestKicExistingNetwork 143.29
168 TestKicCustomSubnet 139.79
169 TestMainNoArgs 0.33
170 TestMinikubeProfile 295.95
173 TestMountStart/serial/StartWithMountFirst 49.48
174 TestMountStart/serial/VerifyMountFirst 5.97
175 TestMountStart/serial/StartWithMountSecond 49.82
176 TestMountStart/serial/VerifyMountSecond 6.08
177 TestMountStart/serial/DeleteFirst 18.97
178 TestMountStart/serial/VerifyMountPostDelete 6.08
179 TestMountStart/serial/Stop 8.38
180 TestMountStart/serial/RestartStopped 28.92
181 TestMountStart/serial/VerifyMountPostStop 6.4
184 TestMultiNode/serial/FreshStart2Nodes 261
185 TestMultiNode/serial/DeployApp2Nodes 25.28
186 TestMultiNode/serial/PingHostFrom2Pods 10.6
187 TestMultiNode/serial/AddNode 116.56
188 TestMultiNode/serial/ProfileList 6.42
189 TestMultiNode/serial/CopyFile 216.66
190 TestMultiNode/serial/StopNode 29.42
191 TestMultiNode/serial/StartAfterStop 61.39
192 TestMultiNode/serial/RestartKeepsNodes 218.87
193 TestMultiNode/serial/DeleteNode 45.55
194 TestMultiNode/serial/StopMultiNode 40.28
195 TestMultiNode/serial/RestartMultiNode 144.03
196 TestMultiNode/serial/ValidateNameConflict 140.12
200 TestPreload 349.33
201 TestScheduledStopWindows 219.79
205 TestInsufficientStorage 108.12
206 TestRunningBinaryUpgrade 375.11
209 TestMissingContainerUpgrade 466.39
213 TestStoppedBinaryUpgrade/Setup 0.63
215 TestNoKubernetes/serial/StartNoK8sWithVersion 0.57
221 TestPause/serial/Start 195.15
222 TestNoKubernetes/serial/StartWithK8s 188.24
223 TestStoppedBinaryUpgrade/Upgrade 413.14
224 TestNoKubernetes/serial/StartWithStopK8s 74.74
225 TestPause/serial/SecondStartNoReconfiguration 42.54
226 TestPause/serial/Pause 7.1
227 TestPause/serial/VerifyStatus 7.76
228 TestPause/serial/Unpause 7.06
229 TestPause/serial/PauseAgain 7.86
231 TestPause/serial/DeletePaused 27.96
232 TestPause/serial/VerifyDeletedResources 20.15
233 TestStoppedBinaryUpgrade/MinikubeLogs 10.95
246 TestStartStop/group/old-k8s-version/serial/FirstStart 212.56
248 TestStartStop/group/no-preload/serial/FirstStart 192.17
250 TestStartStop/group/embed-certs/serial/FirstStart 152.4
252 TestStartStop/group/default-k8s-different-port/serial/FirstStart 145.54
253 TestStartStop/group/embed-certs/serial/DeployApp 11.24
254 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 12.51
255 TestStartStop/group/embed-certs/serial/Stop 19.59
256 TestStartStop/group/no-preload/serial/DeployApp 12.14
257 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 6.12
258 TestStartStop/group/embed-certs/serial/SecondStart 412.76
259 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 6.37
260 TestStartStop/group/old-k8s-version/serial/DeployApp 12.3
261 TestStartStop/group/no-preload/serial/Stop 19.75
262 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 6.51
263 TestStartStop/group/old-k8s-version/serial/Stop 19.45
264 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 6.94
265 TestStartStop/group/no-preload/serial/SecondStart 454.1
266 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 7.15
267 TestStartStop/group/old-k8s-version/serial/SecondStart 477.73
268 TestStartStop/group/default-k8s-different-port/serial/DeployApp 11.32
269 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 6.44
270 TestStartStop/group/default-k8s-different-port/serial/Stop 19.02
271 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 6.41
272 TestStartStop/group/default-k8s-different-port/serial/SecondStart 419.15
273 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 39.1
274 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.54
275 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 7.89
276 TestStartStop/group/embed-certs/serial/Pause 56.5
277 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 48.11
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.05
279 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 51.1
280 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 11.55
281 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 9.52
282 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 8.6
283 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 8.68
284 TestStartStop/group/old-k8s-version/serial/Pause 45.7
285 TestStartStop/group/no-preload/serial/Pause 45.69
287 TestStartStop/group/newest-cni/serial/FirstStart 519.82
288 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.57
289 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 7.74
290 TestStartStop/group/default-k8s-different-port/serial/Pause 50.95
291 TestNetworkPlugins/group/auto/Start 763.55
294 TestStartStop/group/newest-cni/serial/DeployApp 0
295 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 6.51
296 TestStartStop/group/newest-cni/serial/Stop 19.58
297 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 6.46
298 TestStartStop/group/newest-cni/serial/SecondStart 84.45
299 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
300 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
301 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 8.24
304 TestNetworkPlugins/group/bridge/Start 131.71
305 TestNetworkPlugins/group/auto/KubeletFlags 7.12
306 TestNetworkPlugins/group/auto/NetCatPod 21.04
308 TestNetworkPlugins/group/bridge/KubeletFlags 7.25
309 TestNetworkPlugins/group/bridge/NetCatPod 21.9
310 TestNetworkPlugins/group/bridge/DNS 0.64
311 TestNetworkPlugins/group/bridge/Localhost 0.49
312 TestNetworkPlugins/group/bridge/HairPin 0.52
313 TestNetworkPlugins/group/enable-default-cni/Start 142.88
315 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 7.03
316 TestNetworkPlugins/group/enable-default-cni/NetCatPod 31.59
317 TestNetworkPlugins/group/enable-default-cni/DNS 0.67
318 TestNetworkPlugins/group/enable-default-cni/Localhost 0.67
319 TestNetworkPlugins/group/enable-default-cni/HairPin 0.59
TestDownloadOnly/v1.16.0/json-events (18.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220602171204-12108 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220602171204-12108 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker: (18.1082536s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (18.11s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.75s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20220602171204-12108
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20220602171204-12108: exit status 85 (746.9311ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 17:12:05
	Running on machine: minikube7
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 17:12:05.799863   12332 out.go:296] Setting OutFile to fd 596 ...
	I0602 17:12:05.855845   12332 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:12:05.855845   12332 out.go:309] Setting ErrFile to fd 616...
	I0602 17:12:05.855845   12332 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0602 17:12:05.871850   12332 root.go:300] Error reading config file at C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0602 17:12:05.876447   12332 out.go:303] Setting JSON to true
	I0602 17:12:05.878430   12332 start.go:115] hostinfo: {"hostname":"minikube7","uptime":53067,"bootTime":1654136858,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0602 17:12:05.878430   12332 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 17:12:05.884661   12332 out.go:97] [download-only-20220602171204-12108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0602 17:12:05.884661   12332 notify.go:193] Checking for updates...
	W0602 17:12:05.884661   12332 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0602 17:12:05.887645   12332 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0602 17:12:05.890608   12332 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0602 17:12:05.893520   12332 out.go:169] MINIKUBE_LOCATION=14269
	I0602 17:12:05.897421   12332 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0602 17:12:05.902482   12332 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0602 17:12:05.903447   12332 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 17:12:08.387790   12332 docker.go:137] docker version: linux-20.10.16
	I0602 17:12:08.397171   12332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 17:12:12.084372   12332 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (3.6871842s)
	I0602 17:12:12.085449   12332 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-02 17:12:09.3804807 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0602 17:12:12.091811   12332 out.go:97] Using the docker driver based on user configuration
	I0602 17:12:12.091811   12332 start.go:284] selected driver: docker
	I0602 17:12:12.091811   12332 start.go:806] validating driver "docker" against <nil>
	I0602 17:12:12.115257   12332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 17:12:14.086810   12332 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9715442s)
	I0602 17:12:14.087423   12332 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2022-06-02 17:12:13.1237013 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0602 17:12:14.087768   12332 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0602 17:12:14.151095   12332 start_flags.go:373] Using suggested 16300MB memory alloc based on sys=65534MB, container=51405MB
	I0602 17:12:14.151902   12332 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0602 17:12:14.171690   12332 out.go:169] Using Docker Desktop driver with the root privilege
	I0602 17:12:14.174765   12332 cni.go:95] Creating CNI manager for ""
	I0602 17:12:14.175296   12332 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 17:12:14.175296   12332 start_flags.go:306] config:
	{Name:download-only-20220602171204-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220602171204-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDom
ain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 17:12:14.177952   12332 out.go:97] Starting control plane node download-only-20220602171204-12108 in cluster download-only-20220602171204-12108
	I0602 17:12:14.178083   12332 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 17:12:14.180627   12332 out.go:97] Pulling base image ...
	I0602 17:12:14.180627   12332 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0602 17:12:14.180813   12332 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 17:12:14.219655   12332 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0602 17:12:14.220089   12332 cache.go:57] Caching tarball of preloaded images
	I0602 17:12:14.220389   12332 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0602 17:12:14.230973   12332 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0602 17:12:14.230973   12332 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0602 17:12:14.312184   12332 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0602 17:12:15.332265   12332 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0602 17:12:15.332265   12332 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0602 17:12:15.332265   12332 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0602 17:12:15.332265   12332 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0602 17:12:15.333254   12332 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0602 17:12:17.287884   12332 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0602 17:12:17.288882   12332 preload.go:256] verifying checksum of C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0602 17:12:18.329361   12332 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0602 17:12:18.329361   12332 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\download-only-20220602171204-12108\config.json ...
	I0602 17:12:18.329361   12332 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\download-only-20220602171204-12108\config.json: {Name:mkd3ad3210e90c7f2cf72401022728fd1905a5e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:12:18.331234   12332 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0602 17:12:18.332239   12332 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/windows/amd64/kubectl.exe?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/windows/amd64/kubectl.exe.sha1 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\windows\amd64\v1.16.0/kubectl.exe
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220602171204-12108"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.75s)
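
The non-zero exit above is expected rather than a defect: a --download-only start writes the profile's config.json but never creates a node, so "minikube logs" has nothing to collect and exits 85 (hence the "control plane node does not exist" hint in the output). A minimal reproduction sketch, with an illustrative profile name and a POSIX shell assumed:

  minikube start -o=json --download-only -p download-only-demo --force --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker
  minikube logs -p download-only-demo || echo "minikube logs exited $? (85 in this run)"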

                                                
                                    
TestDownloadOnly/v1.23.6/json-events (13.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220602171204-12108 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220602171204-12108 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=docker --driver=docker: (13.0058328s)
--- PASS: TestDownloadOnly/v1.23.6/json-events (13.01s)

                                                
                                    
TestDownloadOnly/v1.23.6/preload-exists (0.02s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/preload-exists
--- PASS: TestDownloadOnly/v1.23.6/preload-exists (0.02s)

                                                
                                    
TestDownloadOnly/v1.23.6/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/kubectl
--- PASS: TestDownloadOnly/v1.23.6/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.6/LogsDuration (0.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20220602171204-12108
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20220602171204-12108: exit status 85 (624.2186ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 17:12:23
	Running on machine: minikube7
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 17:12:23.266077    6596 out.go:296] Setting OutFile to fd 620 ...
	I0602 17:12:23.320752    6596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:12:23.320752    6596 out.go:309] Setting ErrFile to fd 632...
	I0602 17:12:23.320752    6596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0602 17:12:23.335003    6596 root.go:300] Error reading config file at C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0602 17:12:23.336107    6596 out.go:303] Setting JSON to true
	I0602 17:12:23.339132    6596 start.go:115] hostinfo: {"hostname":"minikube7","uptime":53085,"bootTime":1654136858,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0602 17:12:23.339255    6596 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 17:12:23.342699    6596 out.go:97] [download-only-20220602171204-12108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0602 17:12:23.343723    6596 notify.go:193] Checking for updates...
	I0602 17:12:23.345629    6596 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0602 17:12:23.349337    6596 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0602 17:12:23.352169    6596 out.go:169] MINIKUBE_LOCATION=14269
	I0602 17:12:23.355320    6596 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0602 17:12:23.360179    6596 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0602 17:12:23.361066    6596 config.go:178] Loaded profile config "download-only-20220602171204-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0602 17:12:23.361209    6596 start.go:714] api.Load failed for download-only-20220602171204-12108: filestore "download-only-20220602171204-12108": Docker machine "download-only-20220602171204-12108" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0602 17:12:23.361209    6596 driver.go:358] Setting default libvirt URI to qemu:///system
	W0602 17:12:23.361209    6596 start.go:714] api.Load failed for download-only-20220602171204-12108: filestore "download-only-20220602171204-12108": Docker machine "download-only-20220602171204-12108" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0602 17:12:25.811916    6596 docker.go:137] docker version: linux-20.10.16
	I0602 17:12:25.820484    6596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 17:12:27.744638    6596 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.924055s)
	I0602 17:12:27.751939    6596 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2022-06-02 17:12:26.7912341 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0602 17:12:27.755576    6596 out.go:97] Using the docker driver based on existing profile
	I0602 17:12:27.755669    6596 start.go:284] selected driver: docker
	I0602 17:12:27.755669    6596 start.go:806] validating driver "docker" against &{Name:download-only-20220602171204-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220602171204-12108 Nam
espace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 17:12:27.777190    6596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 17:12:29.726309    6596 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9491102s)
	I0602 17:12:29.726309    6596 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2022-06-02 17:12:28.7627596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0602 17:12:29.771020    6596 cni.go:95] Creating CNI manager for ""
	I0602 17:12:29.771298    6596 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 17:12:29.771298    6596 start_flags.go:306] config:
	{Name:download-only-20220602171204-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:download-only-20220602171204-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDom
ain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 17:12:29.872450    6596 out.go:97] Starting control plane node download-only-20220602171204-12108 in cluster download-only-20220602171204-12108
	I0602 17:12:29.873052    6596 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 17:12:29.878483    6596 out.go:97] Pulling base image ...
	I0602 17:12:29.878561    6596 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 17:12:29.878561    6596 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 17:12:29.919373    6596 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 17:12:29.919373    6596 cache.go:57] Caching tarball of preloaded images
	I0602 17:12:29.920375    6596 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 17:12:29.923394    6596 out.go:97] Downloading Kubernetes v1.23.6 preload ...
	I0602 17:12:29.923394    6596 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 ...
	I0602 17:12:29.990033    6596 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4?checksum=md5:a6c3f222f3cce2a88e27e126d64eb717 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 17:12:31.058606    6596 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0602 17:12:31.058770    6596 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0602 17:12:31.058770    6596 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1654032859-14252@sha256_6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496.tar
	I0602 17:12:31.058770    6596 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0602 17:12:31.058770    6596 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory, skipping pull
	I0602 17:12:31.059301    6596 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in cache, skipping pull
	I0602 17:12:31.059301    6596 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 as a tarball
	I0602 17:12:33.057457    6596 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 ...
	I0602 17:12:33.057457    6596 preload.go:256] verifying checksum of C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220602171204-12108"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.6/LogsDuration (0.63s)
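
The preload downloads in the two Last Start logs above carry their md5 sums in the ?checksum= query parameter, so they can be verified by hand. The URLs and sums below are copied verbatim from those logs; curl and md5sum are assumed to be available:

  curl -LO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
  md5sum preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4   # expect 326f3ce331abb64565b50b8c9e791244
  curl -LO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
  md5sum preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4   # expect a6c3f222f3cce2a88e27e126d64eb717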

                                                
                                    
TestDownloadOnly/DeleteAll (11.18s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (11.183845s)
--- PASS: TestDownloadOnly/DeleteAll (11.18s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (6.92s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-20220602171204-12108
aaa_download_only_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-20220602171204-12108: (6.9167026s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (6.92s)

                                                
                                    
TestDownloadOnlyKic (45.51s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-20220602171301-12108 --force --alsologtostderr --driver=docker
aaa_download_only_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-20220602171301-12108 --force --alsologtostderr --driver=docker: (36.2149988s)
helpers_test.go:175: Cleaning up "download-docker-20220602171301-12108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-20220602171301-12108
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-20220602171301-12108: (8.1609452s)
--- PASS: TestDownloadOnlyKic (45.51s)

                                                
                                    
TestBinaryMirror (16.28s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-20220602171347-12108 --alsologtostderr --binary-mirror http://127.0.0.1:50625 --driver=docker
aaa_download_only_test.go:310: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-20220602171347-12108 --alsologtostderr --binary-mirror http://127.0.0.1:50625 --driver=docker: (8.1561331s)
helpers_test.go:175: Cleaning up "binary-mirror-20220602171347-12108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-20220602171347-12108
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p binary-mirror-20220602171347-12108: (7.8815272s)
--- PASS: TestBinaryMirror (16.28s)
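
--binary-mirror only changes where the kubectl, kubelet and kubeadm binaries are fetched from; any static file server that mimics the release bucket's layout will do. A sketch under stated assumptions: the <version>/bin/<os>/<arch> layout is inferred from the kubectl download URL in the v1.16.0 log above, and python3's built-in server stands in for whatever the test actually served on 127.0.0.1:50625:

  mkdir -p mirror/v1.23.6/bin/windows/amd64
  # copy kubectl.exe (plus its .sha1 file, used by the ?checksum= query) into that directory
  python3 -m http.server 50625 --directory mirror &
  minikube start --download-only -p binary-mirror-demo --binary-mirror http://127.0.0.1:50625 --driver=docker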

                                                
                                    
TestOffline (223s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-20220602190816-12108 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-20220602190816-12108 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (3m15.3723781s)
helpers_test.go:175: Cleaning up "offline-docker-20220602190816-12108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-20220602190816-12108
E0602 19:11:57.277618   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-20220602190816-12108: (27.6245776s)
--- PASS: TestOffline (223.00s)

                                                
                                    
TestAddons/Setup (473.76s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:75: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-20220602171403-12108 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:75: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-20220602171403-12108 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (7m53.7598717s)
--- PASS: TestAddons/Setup (473.76s)

                                                
                                    
TestAddons/parallel/MetricsServer (14.52s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:357: metrics-server stabilized in 27.8338ms

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:342: "metrics-server-bd6f4dd56-kfld5" [63c97f67-c9fe-4c35-8ed4-1073aa18d8f6] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0646492s

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220602171403-12108 top pods -n kube-system

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220602171403-12108 addons disable metrics-server --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:382: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220602171403-12108 addons disable metrics-server --alsologtostderr -v=1: (9.0211458s)
--- PASS: TestAddons/parallel/MetricsServer (14.52s)
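
The same round trip can be replayed by hand; kubectl top only returns rows once the metrics-server APIService is actually serving, which is what the "stabilized"/"healthy" waits above guard against. Profile and context names here are illustrative:

  minikube -p addons-demo addons enable metrics-server
  kubectl --context addons-demo -n kube-system rollout status deployment metrics-server
  kubectl --context addons-demo top pods -n kube-system
  minikube -p addons-demo addons disable metrics-server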

                                                
                                    
TestAddons/parallel/HelmTiller (60.53s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:406: tiller-deploy stabilized in 27.8338ms

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:342: "tiller-deploy-6d67d5465d-m99rb" [58e78bf4-1baa-4e69-94e2-d97112bb9593] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0640898s

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) Run:  kubectl --context addons-20220602171403-12108 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) Done: kubectl --context addons-20220602171403-12108 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (37.9121262s)
addons_test.go:428: kubectl --context addons-20220602171403-12108 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: 
addons_test.go:423: (dbg) Run:  kubectl --context addons-20220602171403-12108 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) Done: kubectl --context addons-20220602171403-12108 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (10.609921s)
addons_test.go:440: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220602171403-12108 addons disable helm-tiller --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:440: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220602171403-12108 addons disable helm-tiller --alsologtostderr -v=1: (6.3044902s)
--- PASS: TestAddons/parallel/HelmTiller (60.53s)
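
The "Unable to use a TTY" stderr above is a side effect of passing -t to kubectl run from a non-interactive CI session, not a tiller failure, which is why the retried run succeeds and the test passes. Requesting stdin without a TTY avoids the warning; a sketch with a shortened context name:

  kubectl --context addons-demo run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -i --namespace=kube-system -- version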

                                                
                                    
TestAddons/parallel/CSI (88.25s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:511: csi-hostpath-driver pods stabilized in 43.9037ms
addons_test.go:514: (dbg) Run:  kubectl --context addons-20220602171403-12108 create -f testdata\csi-hostpath-driver\pvc.yaml

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:514: (dbg) Done: kubectl --context addons-20220602171403-12108 create -f testdata\csi-hostpath-driver\pvc.yaml: (1.6134947s)
addons_test.go:519: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220602171403-12108 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:524: (dbg) Run:  kubectl --context addons-20220602171403-12108 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:529: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [dba61eed-c08d-4302-a3a8-a2dcace622ca] Pending
helpers_test.go:342: "task-pv-pod" [dba61eed-c08d-4302-a3a8-a2dcace622ca] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [dba61eed-c08d-4302-a3a8-a2dcace622ca] Running
addons_test.go:529: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 37.1547836s
addons_test.go:534: (dbg) Run:  kubectl --context addons-20220602171403-12108 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:539: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220602171403-12108 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:425: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220602171403-12108 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:544: (dbg) Run:  kubectl --context addons-20220602171403-12108 delete pod task-pv-pod

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:544: (dbg) Done: kubectl --context addons-20220602171403-12108 delete pod task-pv-pod: (3.3175556s)
addons_test.go:550: (dbg) Run:  kubectl --context addons-20220602171403-12108 delete pvc hpvc
addons_test.go:556: (dbg) Run:  kubectl --context addons-20220602171403-12108 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:556: (dbg) Done: kubectl --context addons-20220602171403-12108 create -f testdata\csi-hostpath-driver\pvc-restore.yaml: (1.0070386s)
addons_test.go:561: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220602171403-12108 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220602171403-12108 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:566: (dbg) Run:  kubectl --context addons-20220602171403-12108 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [75b5ff50-6124-4b29-bbd6-583fc119a575] Pending
helpers_test.go:342: "task-pv-pod-restore" [75b5ff50-6124-4b29-bbd6-583fc119a575] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [75b5ff50-6124-4b29-bbd6-583fc119a575] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:571: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 16.083529s
addons_test.go:576: (dbg) Run:  kubectl --context addons-20220602171403-12108 delete pod task-pv-pod-restore

=== CONT  TestAddons/parallel/CSI
addons_test.go:576: (dbg) Done: kubectl --context addons-20220602171403-12108 delete pod task-pv-pod-restore: (2.2164656s)
addons_test.go:580: (dbg) Run:  kubectl --context addons-20220602171403-12108 delete pvc hpvc-restore
addons_test.go:584: (dbg) Run:  kubectl --context addons-20220602171403-12108 delete volumesnapshot new-snapshot-demo
addons_test.go:588: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220602171403-12108 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:588: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220602171403-12108 addons disable csi-hostpath-driver --alsologtostderr -v=1: (13.9943232s)
addons_test.go:592: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220602171403-12108 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:592: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220602171403-12108 addons disable volumesnapshots --alsologtostderr -v=1: (5.8506291s)
--- PASS: TestAddons/parallel/CSI (88.25s)
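
The snapshot/restore sequence above can be replayed outside the test harness. A minimal sketch in Go (the language of addons_test.go), assuming kubectl is on PATH and reusing the profile and manifest names from the log; the run helper is illustrative, not the harness's own code:

	// csi_replay.go - sketch of the CSI snapshot/restore flow exercised above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run shells out to kubectl the same way the test's (dbg) Run steps do.
	func run(args ...string) error {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Printf("kubectl %v\n%s", args, out)
		return err
	}

	func main() {
		ctx := "--context=addons-20220602171403-12108" // profile from the log
		_ = run(ctx, "create", "-f", `testdata\csi-hostpath-driver\snapshot.yaml`)
		// Readiness is polled via jsonpath, as helpers_test.go:417 does above.
		_ = run(ctx, "-n", "default", "get", "volumesnapshot", "new-snapshot-demo",
			"-o", "jsonpath={.status.readyToUse}")
		_ = run(ctx, "delete", "pod", "task-pv-pod")
		_ = run(ctx, "create", "-f", `testdata\csi-hostpath-driver\pvc-restore.yaml`)
	}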

TestAddons/serial/GCPAuth (27.85s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:603: (dbg) Run:  kubectl --context addons-20220602171403-12108 create -f testdata\busybox.yaml
addons_test.go:603: (dbg) Done: kubectl --context addons-20220602171403-12108 create -f testdata\busybox.yaml: (1.9528017s)
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [822360cc-dbc7-42db-8aeb-36fd25b83327] Pending
helpers_test.go:342: "busybox" [822360cc-dbc7-42db-8aeb-36fd25b83327] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [822360cc-dbc7-42db-8aeb-36fd25b83327] Running
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 10.0831049s
addons_test.go:615: (dbg) Run:  kubectl --context addons-20220602171403-12108 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:628: (dbg) Run:  kubectl --context addons-20220602171403-12108 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:652: (dbg) Run:  kubectl --context addons-20220602171403-12108 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:665: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220602171403-12108 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:665: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220602171403-12108 addons disable gcp-auth --alsologtostderr -v=1: (14.2756912s)
--- PASS: TestAddons/serial/GCPAuth (27.85s)

TestAddons/StoppedEnableDisable (24.45s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-20220602171403-12108
addons_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-20220602171403-12108: (18.6001386s)
addons_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-20220602171403-12108
addons_test.go:136: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-20220602171403-12108: (2.9357701s)
addons_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-20220602171403-12108
addons_test.go:140: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-20220602171403-12108: (2.9106749s)
--- PASS: TestAddons/StoppedEnableDisable (24.45s)

TestCertOptions (167.03s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-20220602191948-12108 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-20220602191948-12108 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (2m3.245624s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-20220602191948-12108 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
E0602 19:21:57.288494   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.

=== CONT  TestCertOptions
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-20220602191948-12108 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (8.6471814s)
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-20220602191948-12108 -- "sudo cat /etc/kubernetes/admin.conf"

=== CONT  TestCertOptions
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-20220602191948-12108 -- "sudo cat /etc/kubernetes/admin.conf": (7.4828273s)
helpers_test.go:175: Cleaning up "cert-options-20220602191948-12108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-20220602191948-12108

=== CONT  TestCertOptions
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-20220602191948-12108: (26.2012298s)
--- PASS: TestCertOptions (167.03s)
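
This test starts a cluster with extra API-server SANs and a custom port (8555), then inspects the served certificate over minikube ssh. A minimal Go sketch of that check, assuming the profile above still exists; illustrative, not cert_options_test.go itself:

	// cert_sans_check.go - sketch: confirm apiserver.crt carries the extra SANs
	// requested via --apiserver-ips/--apiserver-names in the start command above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-windows-amd64.exe",
			"-p", "cert-options-20220602191948-12108", "ssh",
			"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").CombinedOutput()
		if err != nil {
			fmt.Println("ssh failed:", err)
			return
		}
		for _, san := range []string{"127.0.0.1", "192.168.15.15", "localhost", "www.google.com"} {
			fmt.Printf("SAN %s present: %v\n", san, strings.Contains(string(out), san))
		}
	}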

TestCertExpiration (390.05s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-20220602191754-12108 --memory=2048 --cert-expiration=3m --driver=docker
E0602 19:19:06.671184   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-20220602191754-12108 --memory=2048 --cert-expiration=3m --driver=docker: (2m11.2670812s)

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-20220602191754-12108 --memory=2048 --cert-expiration=8760h --driver=docker
E0602 19:23:20.503346   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-20220602191754-12108 --memory=2048 --cert-expiration=8760h --driver=docker: (46.0801326s)
helpers_test.go:175: Cleaning up "cert-expiration-20220602191754-12108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-20220602191754-12108
E0602 19:24:04.565185   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
E0602 19:24:06.674316   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-20220602191754-12108: (32.6956146s)
--- PASS: TestCertExpiration (390.05s)

TestDockerFlags (168.7s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-20220602191946-12108 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-20220602191946-12108 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (2m4.082439s)
docker_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20220602191946-12108 ssh "sudo systemctl show docker --property=Environment --no-pager"

=== CONT  TestDockerFlags
docker_test.go:50: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-20220602191946-12108 ssh "sudo systemctl show docker --property=Environment --no-pager": (8.4997246s)
docker_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20220602191946-12108 ssh "sudo systemctl show docker --property=ExecStart --no-pager"

=== CONT  TestDockerFlags
docker_test.go:61: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-20220602191946-12108 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (7.8255068s)
helpers_test.go:175: Cleaning up "docker-flags-20220602191946-12108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-20220602191946-12108

=== CONT  TestDockerFlags
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-20220602191946-12108: (28.2906143s)
--- PASS: TestDockerFlags (168.70s)
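
What the two systemctl show calls above assert: the --docker-env values must land in the unit's Environment, and the --docker-opt values in dockerd's ExecStart command line. A hedged Go sketch of the same check; the profile name is from the log, while the flag spellings --debug and --icc=true are assumptions about how minikube forwards the opts:

	// docker_flags_check.go - sketch of the systemd-unit assertions above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func show(property string) string {
		out, _ := exec.Command("out/minikube-windows-amd64.exe",
			"-p", "docker-flags-20220602191946-12108", "ssh",
			"sudo systemctl show docker --property="+property+" --no-pager").Output()
		return string(out)
	}

	func main() {
		env := show("Environment") // expect FOO=BAR and BAZ=BAT from --docker-env
		cmd := show("ExecStart")   // expect --debug and --icc=true from --docker-opt
		fmt.Println("env ok:", strings.Contains(env, "FOO=BAR") && strings.Contains(env, "BAZ=BAT"))
		fmt.Println("opt ok:", strings.Contains(cmd, "--debug") && strings.Contains(cmd, "--icc=true"))
	}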

TestForceSystemdFlag (190.31s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-20220602191336-12108 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-20220602191336-12108 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (2m33.8243433s)
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-20220602191336-12108 ssh "docker info --format {{.CgroupDriver}}"

=== CONT  TestForceSystemdFlag
docker_test.go:104: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-20220602191336-12108 ssh "docker info --format {{.CgroupDriver}}": (9.0085132s)
helpers_test.go:175: Cleaning up "force-systemd-flag-20220602191336-12108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-20220602191336-12108

=== CONT  TestForceSystemdFlag
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-20220602191336-12108: (27.4723712s)
--- PASS: TestForceSystemdFlag (190.31s)

TestForceSystemdEnv (181.37s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-20220602191646-12108 --memory=2048 --alsologtostderr -v=5 --driver=docker
E0602 19:16:57.279344   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.

=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-20220602191646-12108 --memory=2048 --alsologtostderr -v=5 --driver=docker: (2m20.3701347s)
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-20220602191646-12108 ssh "docker info --format {{.CgroupDriver}}"

=== CONT  TestForceSystemdEnv
docker_test.go:104: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-20220602191646-12108 ssh "docker info --format {{.CgroupDriver}}": (9.3908362s)
helpers_test.go:175: Cleaning up "force-systemd-env-20220602191646-12108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-20220602191646-12108

=== CONT  TestForceSystemdEnv
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-20220602191646-12108: (31.603437s)
--- PASS: TestForceSystemdEnv (181.37s)

TestErrorSpam/setup (113.54s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-20220602172442-12108 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 --driver=docker
error_spam_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-20220602172442-12108 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 --driver=docker: (1m53.5407653s)
error_spam_test.go:88: acceptable stderr: "! C:\\ProgramData\\chocolatey\\bin\\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.23.6."
--- PASS: TestErrorSpam/setup (113.54s)

TestErrorSpam/start (22.32s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 start --dry-run
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 start --dry-run: (7.6061048s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 start --dry-run
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 start --dry-run: (7.4536379s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 start --dry-run
E0602 17:26:57.250024   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 17:26:57.264404   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 17:26:57.279771   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 17:26:57.305473   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 17:26:57.358757   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 17:26:57.451781   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 17:26:57.623056   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 17:26:57.957854   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 start --dry-run: (7.2534231s)
--- PASS: TestErrorSpam/start (22.32s)

TestErrorSpam/status (19.72s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 status
E0602 17:26:58.600480   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 17:26:59.883862   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 17:27:02.451470   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 status: (6.5235243s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 status
E0602 17:27:07.583335   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 status: (6.6104348s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 status
E0602 17:27:17.825878   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 status: (6.5831549s)
--- PASS: TestErrorSpam/status (19.72s)

TestErrorSpam/pause (17.22s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 pause
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 pause: (6.3388683s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 pause
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 pause: (5.4364199s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 pause
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 pause: (5.4431179s)
--- PASS: TestErrorSpam/pause (17.22s)

TestErrorSpam/unpause (17.49s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 unpause
E0602 17:27:38.310610   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 unpause: (6.1118769s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 unpause
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 unpause: (5.7868888s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 unpause
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 unpause: (5.5920568s)
--- PASS: TestErrorSpam/unpause (17.49s)

TestErrorSpam/stop (32.54s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 stop
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 stop: (17.7881718s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 stop
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 stop: (7.3680966s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 stop
E0602 17:28:19.277532   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220602172442-12108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220602172442-12108 stop: (7.3779846s)
--- PASS: TestErrorSpam/stop (32.54s)

TestFunctional/serial/CopySyncFile (0.04s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\12108\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

TestFunctional/serial/StartWithProxy (127.05s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220602172845-12108 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
E0602 17:29:41.212840   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
functional_test.go:2160: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220602172845-12108 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (2m7.0397827s)
--- PASS: TestFunctional/serial/StartWithProxy (127.05s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (34.91s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220602172845-12108 --alsologtostderr -v=8
functional_test.go:651: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220602172845-12108 --alsologtostderr -v=8: (34.9083339s)
functional_test.go:655: soft start took 34.9097913s for "functional-20220602172845-12108" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.91s)

TestFunctional/serial/KubeContext (0.18s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.18s)

TestFunctional/serial/KubectlGetPods (0.36s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220602172845-12108 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.36s)

TestFunctional/serial/CacheCmd/cache/add_remote (18.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 cache add k8s.gcr.io/pause:3.1: (6.1091529s)
functional_test.go:1041: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 cache add k8s.gcr.io/pause:3.3: (6.0729189s)
functional_test.go:1041: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 cache add k8s.gcr.io/pause:latest
functional_test.go:1041: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 cache add k8s.gcr.io/pause:latest: (6.1192687s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (18.30s)

TestFunctional/serial/CacheCmd/cache/add_local (9.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220602172845-12108 C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2994546486\001
functional_test.go:1069: (dbg) Done: docker build -t minikube-local-cache-test:functional-20220602172845-12108 C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2994546486\001: (2.3958679s)
functional_test.go:1081: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 cache add minikube-local-cache-test:functional-20220602172845-12108
functional_test.go:1081: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 cache add minikube-local-cache-test:functional-20220602172845-12108: (5.3981887s)
functional_test.go:1086: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 cache delete minikube-local-cache-test:functional-20220602172845-12108
functional_test.go:1075: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220602172845-12108
functional_test.go:1075: (dbg) Done: docker rmi minikube-local-cache-test:functional-20220602172845-12108: (1.0765114s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (9.27s)
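
The round trip above is: docker build a throwaway local image, minikube cache add it into the node, then delete it from the cache and remove the host-side tag. A compact sketch, assuming a Dockerfile in the current directory stands in for the temp-dir build context shown in the log:

	// cache_local.go - sketch of the local-image cache round trip above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func must(name string, args ...string) {
		if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
			fmt.Printf("%s %v failed: %v\n%s", name, args, err, out)
		}
	}

	func main() {
		tag := "minikube-local-cache-test:functional-20220602172845-12108"
		mk := "out/minikube-windows-amd64.exe"
		must("docker", "build", "-t", tag, ".") // stand-in for the temp-dir build context
		must(mk, "-p", "functional-20220602172845-12108", "cache", "add", tag)
		must(mk, "-p", "functional-20220602172845-12108", "cache", "delete", tag)
		must("docker", "rmi", tag) // remove the host-side tag, as the test's cleanup does
	}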

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.37s)

TestFunctional/serial/CacheCmd/cache/list (0.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.35s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (6.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh sudo crictl images
E0602 17:31:57.262103   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
functional_test.go:1116: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh sudo crictl images: (6.329337s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (6.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (25.02s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1139: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh sudo docker rmi k8s.gcr.io/pause:latest: (6.3813975s)
functional_test.go:1145: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (6.291973s)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 cache reload
functional_test.go:1150: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 cache reload: (6.0272268s)
functional_test.go:1155: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
E0602 17:32:25.054437   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
functional_test.go:1155: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: (6.3218318s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (25.02s)
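
The reload flow above is worth spelling out: the image is removed inside the node, crictl inspecti then fails with exit status 1 (the captured stdout shows "no such image"), cache reload repopulates it, and the same inspecti succeeds. A minimal sketch, with mk as a hypothetical wrapper around the minikube binary:

	// cache_reload_replay.go - sketch of the remove / fail / reload / succeed cycle above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func mk(args ...string) error {
		all := append([]string{"-p", "functional-20220602172845-12108"}, args...)
		return exec.Command("out/minikube-windows-amd64.exe", all...).Run()
	}

	func main() {
		_ = mk("ssh", "sudo docker rmi k8s.gcr.io/pause:latest")
		err := mk("ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest")
		fmt.Println("after rmi, inspecti should fail:", err) // exit status 1 in the log
		_ = mk("cache", "reload")
		err = mk("ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest")
		fmt.Println("after reload, inspecti should succeed:", err) // expect <nil>
	}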

TestFunctional/serial/CacheCmd/cache/delete (0.74s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.74s)

TestFunctional/serial/MinikubeKubectlCmd (2.07s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 kubectl -- --context functional-20220602172845-12108 get pods
functional_test.go:708: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 kubectl -- --context functional-20220602172845-12108 get pods: (2.0727441s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (2.07s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (2s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out\kubectl.exe --context functional-20220602172845-12108 get pods
functional_test.go:733: (dbg) Done: out\kubectl.exe --context functional-20220602172845-12108 get pods: (1.9962775s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (2.00s)

TestFunctional/serial/ExtraConfig (75.96s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220602172845-12108 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:749: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220602172845-12108 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m15.9599894s)
functional_test.go:753: restart took 1m15.9602748s for "functional-20220602172845-12108" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (75.96s)

TestFunctional/serial/ComponentHealth (0.26s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220602172845-12108 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:817: etcd phase: Running
functional_test.go:827: etcd status: Ready
functional_test.go:817: kube-apiserver phase: Running
functional_test.go:827: kube-apiserver status: Ready
functional_test.go:817: kube-controller-manager phase: Running
functional_test.go:827: kube-controller-manager status: Ready
functional_test.go:817: kube-scheduler phase: Running
functional_test.go:827: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.26s)
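
The health check above boils down to: list the tier=control-plane pods in kube-system and require each to report phase Running and status Ready. A small sketch using a jsonpath query instead of the test's -o=json parsing:

	// component_health.go - sketch of the control-plane health check above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl",
			"--context", "functional-20220602172845-12108",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system",
			"-o", `jsonpath={range .items[*]}{.metadata.name} {.status.phase}{"\n"}{end}`).Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		// Expect etcd, kube-apiserver, kube-controller-manager, kube-scheduler all Running.
		fmt.Print(string(out))
	}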

TestFunctional/serial/LogsCmd (7.56s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 logs
functional_test.go:1228: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 logs: (7.5544216s)
--- PASS: TestFunctional/serial/LogsCmd (7.56s)

TestFunctional/serial/LogsFileCmd (8.53s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 logs --file C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1562066452\001\logs.txt
functional_test.go:1242: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 logs --file C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1562066452\001\logs.txt: (8.5277703s)
--- PASS: TestFunctional/serial/LogsFileCmd (8.53s)

TestFunctional/parallel/ConfigCmd (2.31s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 config get cpus: exit status 14 (363.2809ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 config set cpus 2
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 config get cpus
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 config unset cpus
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 config get cpus: exit status 14 (386.0803ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (2.31s)
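
The assertion pattern above: config get on an unset key must fail with exit status 14 and "Error: specified key could not be found in config" on stderr, while get after set succeeds. A sketch of detecting that exit code from Go, assuming the same profile:

	// config_get_check.go - sketch of the exit-status-14 assertion above.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-windows-amd64.exe",
			"-p", "functional-20220602172845-12108", "config", "get", "cpus").CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Printf("exit code %d, output: %s", ee.ExitCode(), out) // 14 when the key is unset
			return
		}
		fmt.Printf("cpus = %s", out)
	}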

TestFunctional/parallel/DryRun (12.4s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220602172845-12108 --dry-run --memory 250MB --alsologtostderr --driver=docker

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220602172845-12108 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (5.234702s)

-- stdout --
	* [functional-20220602172845-12108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14269
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0602 17:35:30.388007    1708 out.go:296] Setting OutFile to fd 596 ...
	I0602 17:35:30.441995    1708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:35:30.441995    1708 out.go:309] Setting ErrFile to fd 708...
	I0602 17:35:30.441995    1708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:35:30.474041    1708 out.go:303] Setting JSON to false
	I0602 17:35:30.480082    1708 start.go:115] hostinfo: {"hostname":"minikube7","uptime":54472,"bootTime":1654136858,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0602 17:35:30.480082    1708 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 17:35:30.483847    1708 out.go:177] * [functional-20220602172845-12108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0602 17:35:30.487267    1708 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0602 17:35:30.490241    1708 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0602 17:35:30.493807    1708 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 17:35:30.496808    1708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 17:35:30.500126    1708 config.go:178] Loaded profile config "functional-20220602172845-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 17:35:30.500762    1708 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 17:35:33.168999    1708 docker.go:137] docker version: linux-20.10.16
	I0602 17:35:33.176954    1708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 17:35:35.276491    1708 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0993563s)
	I0602 17:35:35.277366    1708 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:54 SystemTime:2022-06-02 17:35:34.2800977 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 17:35:35.282703    1708 out.go:177] * Using the docker driver based on existing profile
	I0602 17:35:35.285056    1708 start.go:284] selected driver: docker
	I0602 17:35:35.285056    1708 start.go:806] validating driver "docker" against &{Name:functional-20220602172845-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220602172845-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 17:35:35.285056    1708 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 17:35:35.343013    1708 out.go:177] 
	W0602 17:35:35.344965    1708 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0602 17:35:35.348120    1708 out.go:177] 

** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220602172845-12108 --dry-run --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:983: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220602172845-12108 --dry-run --alsologtostderr -v=1 --driver=docker: (7.1675111s)
--- PASS: TestFunctional/parallel/DryRun (12.40s)
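
The non-zero exit captured above is expected: the DryRun test first requests --memory 250MB and asserts that validation rejects it (250MiB requested against a 1800MB usable minimum) before any host is created, then re-runs the dry run with defaults and expects success. A minimal sketch of such a memory-floor check; the names validateMemory and minUsableMB are illustrative, not minikube's own identifiers:
-- example (Go) --
package main

import (
	"fmt"
	"os"
)

// minUsableMB mirrors the 1800MB floor reported in the log above;
// the constant name is an assumption, not minikube's identifier.
const minUsableMB = 1800

// validateMemory is a hypothetical stand-in for the check that made
// `start --dry-run --memory 250MB` exit with RSRC_INSUFFICIENT_REQ_MEMORY.
func validateMemory(requestedMiB int) error {
	if requestedMiB < minUsableMB {
		return fmt.Errorf(
			"RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMiB, minUsableMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to "+err.Error())
		os.Exit(23) // the run above observed exit status 23
	}
}
-- /example --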

                                                
                                    
TestFunctional/parallel/InternationalLanguage (5.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220602172845-12108 --dry-run --memory 250MB --alsologtostderr --driver=docker

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220602172845-12108 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (5.5851812s)

                                                
                                                
-- stdout --
	* [functional-20220602172845-12108] minikube v1.26.0-beta.1 sur Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14269
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0602 17:35:07.969254    3712 out.go:296] Setting OutFile to fd 988 ...
	I0602 17:35:08.030409    3712 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:35:08.030478    3712 out.go:309] Setting ErrFile to fd 768...
	I0602 17:35:08.030478    3712 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:35:08.045783    3712 out.go:303] Setting JSON to false
	I0602 17:35:08.048354    3712 start.go:115] hostinfo: {"hostname":"minikube7","uptime":54450,"bootTime":1654136858,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0602 17:35:08.048354    3712 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0602 17:35:08.053290    3712 out.go:177] * [functional-20220602172845-12108] minikube v1.26.0-beta.1 sur Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0602 17:35:08.055729    3712 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0602 17:35:08.058119    3712 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0602 17:35:08.061928    3712 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 17:35:08.064325    3712 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 17:35:08.067097    3712 config.go:178] Loaded profile config "functional-20220602172845-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 17:35:08.068198    3712 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 17:35:10.836477    3712 docker.go:137] docker version: linux-20.10.16
	I0602 17:35:10.843495    3712 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 17:35:12.959625    3712 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1161204s)
	I0602 17:35:12.960616    3712 info.go:265] docker info: {ID:DTJ5:6HSD:HQYO:NIUY:6CLH:5UQL:6FBK:7BMO:GVE6:JAI7:26MB:GZA3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:54 SystemTime:2022-06-02 17:35:11.8841634 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.6.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 17:35:12.966656    3712 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0602 17:35:12.970590    3712 start.go:284] selected driver: docker
	I0602 17:35:12.970590    3712 start.go:806] validating driver "docker" against &{Name:functional-20220602172845-12108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220602172845-12108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 17:35:12.971637    3712 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 17:35:13.271134    3712 out.go:177] 
	W0602 17:35:13.273570    3712 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0602 17:35:13.274810    3712 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (5.59s)
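
The InternationalLanguage variant repeats the same undersized-memory dry run under a French locale and asserts that the output is localized; the French error reads, in English, "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is below the usable minimum of 1800 MB", matching the English message from DryRun above. A minimal sketch of locale-keyed message selection from LC_ALL/LANG; the messages table and pickLocale helper are illustrative stand-ins (minikube actually ships per-language translation files):
-- example (Go) --
package main

import (
	"fmt"
	"os"
	"strings"
)

// messages is an illustrative translation table; minikube loads
// per-locale translation files, so this map is an assumption.
var messages = map[string]string{
	"en": "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB",
	"fr": "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo",
}

// pickLocale returns the two-letter language code from LC_ALL or LANG,
// e.g. "fr_FR.UTF-8" -> "fr", falling back to English.
func pickLocale() string {
	for _, v := range []string{os.Getenv("LC_ALL"), os.Getenv("LANG")} {
		if len(v) >= 2 {
			return strings.ToLower(v[:2])
		}
	}
	return "en"
}

func main() {
	msg, ok := messages[pickLocale()]
	if !ok {
		msg = messages["en"]
	}
	fmt.Println("X " + msg)
}
-- /example --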

                                                
                                    
TestFunctional/parallel/StatusCmd (20.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 status

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 status: (6.8583794s)
functional_test.go:852: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:852: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (6.7928017s)
functional_test.go:864: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 status -o json

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:864: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 status -o json: (6.6272795s)
--- PASS: TestFunctional/parallel/StatusCmd (20.28s)
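
The -f flag above feeds a Go text/template, which is why the {{.Host}}-style actions expand while the literal text, including the misspelled "kublet:" label, is echoed back verbatim. A self-contained sketch of the same expansion; the trimmed Status struct is assumed for illustration (minikube's real status struct carries more fields):
-- example (Go) --
package main

import (
	"os"
	"text/template"
)

// Status carries only the fields the format string above references.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// The same format string the test passed to `status -f`;
	// only the {{...}} actions are evaluated, literals pass through.
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	tmpl := template.Must(template.New("status").Parse(format))
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
	// Prints: host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
}
-- /example --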

                                                
                                    
TestFunctional/parallel/AddonsCmd (3.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 addons list

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 addons list: (3.2684804s)
functional_test.go:1631: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (3.68s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (50.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [45e29e52-20fa-4ea9-948b-20dcfac60228] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0329516s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220602172845-12108 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220602172845-12108 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220602172845-12108 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220602172845-12108 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [13c07143-bcda-415b-987d-4813238cdbe3] Pending
helpers_test.go:342: "sp-pod" [13c07143-bcda-415b-987d-4813238cdbe3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [13c07143-bcda-415b-987d-4813238cdbe3] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 28.0948778s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220602172845-12108 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:100: (dbg) Done: kubectl --context functional-20220602172845-12108 exec sp-pod -- touch /tmp/mount/foo: (1.1287004s)
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220602172845-12108 delete -f testdata/storage-provisioner/pod.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20220602172845-12108 delete -f testdata/storage-provisioner/pod.yaml: (4.4934673s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220602172845-12108 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [54ac2c9b-7834-43cc-9659-4796f4b3a5c4] Pending
helpers_test.go:342: "sp-pod" [54ac2c9b-7834-43cc-9659-4796f4b3a5c4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [54ac2c9b-7834-43cc-9659-4796f4b3a5c4] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.0904192s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220602172845-12108 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.52s)
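
The sequence above is a persistence round trip: create the claim, write /tmp/mount/foo from the first pod, delete the pod, recreate it against the same claim, and confirm the file survived. A minimal sketch of that flow driven through kubectl, with the context name and manifest paths taken from the log; the run helper is illustrative and omits the readiness waits the real test performs between steps:
-- example (Go) --
package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl against the profile's context and
// returns combined output; a thin illustrative wrapper.
func run(args ...string) (string, error) {
	cmd := exec.Command("kubectl",
		append([]string{"--context", "functional-20220602172845-12108"}, args...)...)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	steps := [][]string{
		{"apply", "-f", "testdata/storage-provisioner/pvc.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"}, // write into the PVC-backed mount
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"}, // fresh pod, same claim
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"},             // foo must still be there
	}
	for _, s := range steps {
		out, err := run(s...)
		fmt.Printf("kubectl %v\n%s", s, out)
		if err != nil {
			panic(err) // the real test waits for pod readiness between steps
		}
	}
}
-- /example --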

                                                
                                    
TestFunctional/parallel/SSHCmd (15.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh "echo hello"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh "echo hello": (7.7581754s)
functional_test.go:1671: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh "cat /etc/hostname"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh "cat /etc/hostname": (7.26785s)
--- PASS: TestFunctional/parallel/SSHCmd (15.03s)

                                                
                                    
TestFunctional/parallel/CpCmd (27.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 cp testdata\cp-test.txt /home/docker/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 cp testdata\cp-test.txt /home/docker/cp-test.txt: (6.0022529s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh -n functional-20220602172845-12108 "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh -n functional-20220602172845-12108 "sudo cat /home/docker/cp-test.txt": (8.5920455s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 cp functional-20220602172845-12108:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalparallelCpCmd2696599892\001\cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 cp functional-20220602172845-12108:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalparallelCpCmd2696599892\001\cp-test.txt: (6.548634s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh -n functional-20220602172845-12108 "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh -n functional-20220602172845-12108 "sudo cat /home/docker/cp-test.txt": (6.5241389s)
--- PASS: TestFunctional/parallel/CpCmd (27.67s)
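
Each cp above is verified by reading the file back with `ssh -n <node> "sudo cat ..."`. A short sketch of the same round trip, comparing the bytes read over SSH with the local fixture; paths and flags mirror the log, the comparison logic is illustrative:
-- example (Go) --
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const profile = "functional-20220602172845-12108"

	// Copy the fixture into the node, as the test above does.
	if err := exec.Command("minikube", "-p", profile, "cp",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}

	// Read it back over SSH and compare with the source.
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", profile,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		panic("cp round-trip mismatch")
	}
	fmt.Println("cp round-trip verified")
}
-- /example --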

                                                
                                    
TestFunctional/parallel/MySQL (82.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220602172845-12108 replace --force -f testdata\mysql.yaml
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-mbb25" [f72f89e4-b36a-4fdd-8ebe-b615c45f18a4] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-mbb25" [f72f89e4-b36a-4fdd-8ebe-b615c45f18a4] Running

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 51.056257s
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220602172845-12108 exec mysql-b87c45988-mbb25 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220602172845-12108 exec mysql-b87c45988-mbb25 -- mysql -ppassword -e "show databases;": exit status 1 (525.6992ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220602172845-12108 exec mysql-b87c45988-mbb25 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220602172845-12108 exec mysql-b87c45988-mbb25 -- mysql -ppassword -e "show databases;": exit status 1 (636.6703ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220602172845-12108 exec mysql-b87c45988-mbb25 -- mysql -ppassword -e "show databases;"

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220602172845-12108 exec mysql-b87c45988-mbb25 -- mysql -ppassword -e "show databases;": exit status 1 (853.555ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220602172845-12108 exec mysql-b87c45988-mbb25 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220602172845-12108 exec mysql-b87c45988-mbb25 -- mysql -ppassword -e "show databases;": exit status 1 (727.7883ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220602172845-12108 exec mysql-b87c45988-mbb25 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220602172845-12108 exec mysql-b87c45988-mbb25 -- mysql -ppassword -e "show databases;": exit status 1 (867.1248ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220602172845-12108 exec mysql-b87c45988-mbb25 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220602172845-12108 exec mysql-b87c45988-mbb25 -- mysql -ppassword -e "show databases;": exit status 1 (568.28ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220602172845-12108 exec mysql-b87c45988-mbb25 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (82.22s)
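
The string of non-zero exits above is the expected warm-up: mysqld keeps initializing after the pod reports Running, so ERROR 1045 and ERROR 2002 recur until the server actually accepts authenticated connections, and the test simply retries `show databases;` until it succeeds. A minimal retry sketch around the same kubectl exec; the attempt count and sleep are illustrative, not the test's exact policy:
-- example (Go) --
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-20220602172845-12108",
		"exec", "mysql-b87c45988-mbb25", "--",
		"mysql", "-ppassword", "-e", "show databases;"}

	// Retry until mysqld accepts the query; the 10-attempt/10s
	// budget here is illustrative.
	var lastErr error
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		lastErr = fmt.Errorf("attempt %d: %v\n%s", attempt, err, out)
		time.Sleep(10 * time.Second)
	}
	panic(lastErr)
}
-- /example --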

                                                
                                    
TestFunctional/parallel/FileSync (6.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/12108/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh "sudo cat /etc/test/nested/copy/12108/hosts"

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1857: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh "sudo cat /etc/test/nested/copy/12108/hosts": (6.8186827s)
functional_test.go:1862: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (6.82s)

                                                
                                    
TestFunctional/parallel/CertSync (41.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/12108.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh "sudo cat /etc/ssl/certs/12108.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh "sudo cat /etc/ssl/certs/12108.pem": (6.4982122s)
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/12108.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh "sudo cat /usr/share/ca-certificates/12108.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh "sudo cat /usr/share/ca-certificates/12108.pem": (6.4964175s)
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh "sudo cat /etc/ssl/certs/51391683.0"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh "sudo cat /etc/ssl/certs/51391683.0": (6.4261763s)
functional_test.go:1925: Checking for existence of /etc/ssl/certs/121082.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh "sudo cat /etc/ssl/certs/121082.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh "sudo cat /etc/ssl/certs/121082.pem": (6.5876596s)
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/121082.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh "sudo cat /usr/share/ca-certificates/121082.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh "sudo cat /usr/share/ca-certificates/121082.pem": (8.2195417s)
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (7.2872487s)
--- PASS: TestFunctional/parallel/CertSync (41.52s)
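
The paths above come in groups: literal copies of the host-side PEMs (12108.pem, 121082.pem) under /etc/ssl/certs and /usr/share/ca-certificates, plus OpenSSL subject-hash names (51391683.0, 3ec20f2e.0) that let TLS stacks locate the certificate by hashed lookup. A sketch that asserts each group of synced locations holds identical bytes; the grouping and the catOverSSH helper are illustrative:
-- example (Go) --
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// catOverSSH reads a file from the minikube node; illustrative helper.
func catOverSSH(profile, path string) []byte {
	out, err := exec.Command("minikube", "-p", profile, "ssh",
		"sudo cat "+path).Output()
	if err != nil {
		panic(fmt.Sprintf("%s: %v", path, err))
	}
	return out
}

func main() {
	const profile = "functional-20220602172845-12108"
	// Each group should contain the same certificate: the named PEM
	// copies plus the OpenSSL subject-hash alias from the log above.
	groups := [][]string{
		{"/etc/ssl/certs/12108.pem", "/usr/share/ca-certificates/12108.pem", "/etc/ssl/certs/51391683.0"},
		{"/etc/ssl/certs/121082.pem", "/usr/share/ca-certificates/121082.pem", "/etc/ssl/certs/3ec20f2e.0"},
	}
	for _, g := range groups {
		want := catOverSSH(profile, g[0])
		for _, p := range g[1:] {
			if !bytes.Equal(want, catOverSSH(profile, p)) {
				panic("cert content mismatch at " + p)
			}
		}
		fmt.Println("in sync:", g)
	}
}
-- /example --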

                                                
                                    
TestFunctional/parallel/NodeLabels (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220602172845-12108 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.25s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (6.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh "sudo systemctl is-active crio"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh "sudo systemctl is-active crio": exit status 1 (6.7243181s)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (6.72s)
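
The non-zero exit above is the passing condition: with the docker runtime selected, `systemctl is-active crio` prints "inactive" and exits with status 3 (systemd's convention for a unit that is not running), which ssh surfaces as "Process exited with status 3". A sketch of checking both the text and the exit code; the profile name comes from the log, the control flow is illustrative:
-- example (Go) --
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("minikube", "-p", "functional-20220602172845-12108",
		"ssh", "sudo systemctl is-active crio")
	out, err := cmd.Output()
	state := strings.TrimSpace(string(out))

	// systemctl is-active exits 0 only for "active", so a non-nil
	// *exec.ExitError together with output "inactive" is the pass case.
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && state == "inactive" {
		fmt.Printf("crio correctly disabled (exit %d)\n", exitErr.ExitCode())
		return
	}
	if err != nil {
		panic(err)
	}
	panic("crio unexpectedly " + state)
}
-- /example --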

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (11.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-windows-amd64.exe profile lis

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Done: out/minikube-windows-amd64.exe profile lis: (3.3555113s)
functional_test.go:1270: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (8.1403651s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (11.50s)

                                                
                                    
TestFunctional/parallel/DockerEnv/powershell (29.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/powershell

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:491: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220602172845-12108 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20220602172845-12108"

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:491: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220602172845-12108 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20220602172845-12108": (18.041754s)
functional_test.go:514: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220602172845-12108 docker-env | Invoke-Expression ; docker images"

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:514: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220602172845-12108 docker-env | Invoke-Expression ; docker images": (11.7987349s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (29.85s)
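
`docker-env | Invoke-Expression` evaluates the emitted variable assignments (DOCKER_HOST and related settings) in the current PowerShell session, so the following `docker images` talks to the daemon inside the minikube node rather than the host's. A Go sketch of the same redirection; it assumes `--shell none` yields plain KEY=VALUE output, which is an assumption about the flag rather than something shown in this log:
-- example (Go) --
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const profile = "functional-20220602172845-12108"

	// --shell none is assumed here to get parseable KEY=VALUE lines
	// instead of shell-specific assignment syntax.
	out, err := exec.Command("minikube", "-p", profile,
		"docker-env", "--shell", "none").Output()
	if err != nil {
		panic(err)
	}
	for _, line := range strings.Split(string(out), "\n") {
		if k, v, ok := strings.Cut(strings.TrimSpace(line), "="); ok {
			os.Setenv(k, v) // e.g. DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH
		}
	}

	// docker inherits the redirected environment, mirroring
	// `docker-env | Invoke-Expression ; docker images` above.
	images := exec.Command("docker", "images")
	images.Stdout, images.Stderr = os.Stdout, os.Stderr
	if err := images.Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
-- /example --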

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-20220602172845-12108 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220602172845-12108 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [c2249378-caeb-4efe-adc9-871b18ef43f0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [c2249378-caeb-4efe-adc9-871b18ef43f0] Running

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 16.0981382s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.73s)
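
The "waiting 4m0s for pods matching ..." lines throughout this report all follow one poll-until-Running shape. A minimal version of that wait using a kubectl JSONPath query; the 3s poll interval and the single-pod phase check are simplifications (the real helper also inspects readiness conditions):
-- example (Go) --
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute) // the 4m0s budget from the log
	for time.Now().Before(deadline) {
		// Ask for the phase of every pod matching the test's label selector.
		out, err := exec.Command("kubectl",
			"--context", "functional-20220602172845-12108",
			"get", "pods", "-l", "run=nginx-svc",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("run=nginx-svc healthy")
			return
		}
		time.Sleep(3 * time.Second) // poll interval is illustrative
	}
	panic("timed out waiting for run=nginx-svc")
}
-- /example --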

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (7.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-windows-amd64.exe profile list

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Done: out/minikube-windows-amd64.exe profile list: (6.8580554s)
functional_test.go:1310: Took "6.858149s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1324: Took "418.5076ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (7.28s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (8.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json

                                                
                                                
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (7.5483343s)
functional_test.go:1361: Took "7.5483343s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1374: Took "464.0204ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (8.01s)
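
`profile list -o json` (with or without --light) emits machine-readable output, which is what the timing assertions above exercise. A sketch that decodes it generically, since the concrete field layout is not shown in this log; the top-level keys are reported without assuming their schema:
-- example (Go) --
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-windows-amd64.exe",
		"profile", "list", "-o", "json", "--light").Output()
	if err != nil {
		panic(err)
	}
	// Decode generically rather than assuming a schema; the concrete
	// structure of `profile list -o json` is not shown in this log.
	var doc map[string]json.RawMessage
	if err := json.Unmarshal(out, &doc); err != nil {
		panic(err)
	}
	for key, raw := range doc {
		fmt.Printf("%s: %d bytes of JSON\n", key, len(raw))
	}
}
-- /example --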

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (4.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 update-context --alsologtostderr -v=2

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 update-context --alsologtostderr -v=2: (4.0700354s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (4.07s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (4.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 update-context --alsologtostderr -v=2
functional_test.go:2045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 update-context --alsologtostderr -v=2: (4.1313004s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (4.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (4.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 update-context --alsologtostderr -v=2

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 update-context --alsologtostderr -v=2: (4.0433863s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (4.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (4.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image ls --format short

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image ls --format short: (4.3617456s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.6
k8s.gcr.io/kube-proxy:v1.23.6
k8s.gcr.io/kube-controller-manager:v1.23.6
k8s.gcr.io/kube-apiserver:v1.23.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-20220602172845-12108
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220602172845-12108
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (4.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (4.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image ls --format table
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image ls --format table: (4.2196151s)

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image ls --format table:
|---------------------------------------------|---------------------------------|---------------|--------|
|                    Image                    |               Tag               |   Image ID    |  Size  |
|---------------------------------------------|---------------------------------|---------------|--------|
| k8s.gcr.io/pause                            | 3.6                             | 6270bb605e12e | 683kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                              | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/kube-scheduler                   | v1.23.6                         | 595f327f224a4 | 53.5MB |
| k8s.gcr.io/etcd                             | 3.5.1-0                         | 25f8c7f3da61c | 293MB  |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                          | a4ca41631cc7a | 46.8MB |
| docker.io/library/minikube-local-cache-test | functional-20220602172845-12108 | 6b609920fa1c8 | 30B    |
| docker.io/library/nginx                     | latest                          | 0e901e68141fd | 142MB  |
| k8s.gcr.io/pause                            | latest                          | 350b164e7ae1d | 240kB  |
| k8s.gcr.io/pause                            | 3.3                             | 0184c1613d929 | 683kB  |
| k8s.gcr.io/pause                            | 3.1                             | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/echoserver                       | 1.8                             | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | alpine                          | b1c3acb288825 | 23.4MB |
| k8s.gcr.io/kube-controller-manager          | v1.23.6                         | df7b72818ad2e | 125MB  |
| gcr.io/google-containers/addon-resizer      | functional-20220602172845-12108 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/mysql                     | 5.7                             | 2a0961b7de03c | 462MB  |
| k8s.gcr.io/kube-apiserver                   | v1.23.6                         | 8fa62c12256df | 135MB  |
| k8s.gcr.io/kube-proxy                       | v1.23.6                         | 4c03754524064 | 112MB  |
|---------------------------------------------|---------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (4.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (4.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image ls --format json

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image ls --format json: (4.2257944s)

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image ls --format json:
[{"id":"4c037545240644e87d79f6b4071331f9adea6176339c98e529b4af8af00d4e47","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.6"],"size":"112000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220602172845-12108"],"size":"32900000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"6b609920fa1c88dc8d9d8ec7797def1b69fef2cfbf7b5c2dab12f40b8931f992","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220602172845-12108"],"size":"30"},{"id":"2a0961b7de03c7b11afd13fec09d0d30
992b6e0b4f947a4aba4273723778ed95","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"462000000"},{"id":"df7b72818ad2e4f1f204c7ffb51239de67f49c6b22671c70354ee5d65ac37657","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.6"],"size":"125000000"},{"id":"595f327f224a42213913a39d224c8aceb96c81ad3909ae13f6045f570aafe8f0","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.6"],"size":"53500000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"293000000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr
.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"0e901e68141fd02f237cf63eb842529f8a9500636a9419e3cf4fb986b8fe3d5d","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"b1c3acb28882519cf6d3a4d7fe2b21d0ae20bde9cfd2c08a7de057f8cfccff15","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23400000"},{"id":"8fa62c12256df9d9d0c3f1cf90856e27d90f209f42271c2f19326a705342c3b6","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.6"],"size":"135000000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (4.23s)
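Note: the stdout above is a flat JSON array of image records. For readers consuming this output, a minimal Go sketch of decoding it follows; the struct and its field names simply mirror the keys visible in the stdout (id, repoDigests, repoTags, size) and are illustrative, not minikube's own types.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// listedImage mirrors the keys visible in the `image ls --format json`
	// stdout above; an illustrative type, not minikube's internal one.
	type listedImage struct {
		ID          string   `json:"id"`
		RepoDigests []string `json:"repoDigests"`
		RepoTags    []string `json:"repoTags"`
		Size        string   `json:"size"` // bytes, reported as a decimal string
	}

	func main() {
		// One record from the output above, with the id shortened for readability.
		raw := []byte(`[{"id":"4c037545240644e8","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.6"],"size":"112000000"}]`)
		var images []listedImage
		if err := json.Unmarshal(raw, &images); err != nil {
			panic(err)
		}
		for _, img := range images {
			fmt.Println(img.RepoTags[0], img.Size)
		}
	}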

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (4.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image ls --format yaml
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image ls --format yaml: (4.3548378s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image ls --format yaml:
- id: df7b72818ad2e4f1f204c7ffb51239de67f49c6b22671c70354ee5d65ac37657
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.6
size: "125000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 6b609920fa1c88dc8d9d8ec7797def1b69fef2cfbf7b5c2dab12f40b8931f992
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220602172845-12108
size: "30"
- id: 8fa62c12256df9d9d0c3f1cf90856e27d90f209f42271c2f19326a705342c3b6
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.6
size: "135000000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220602172845-12108
size: "32900000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 0e901e68141fd02f237cf63eb842529f8a9500636a9419e3cf4fb986b8fe3d5d
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: b1c3acb28882519cf6d3a4d7fe2b21d0ae20bde9cfd2c08a7de057f8cfccff15
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23400000"
- id: 4c037545240644e87d79f6b4071331f9adea6176339c98e529b4af8af00d4e47
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.6
size: "112000000"
- id: 595f327f224a42213913a39d224c8aceb96c81ad3909ae13f6045f570aafe8f0
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.6
size: "53500000"
- id: 25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "293000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 2a0961b7de03c7b11afd13fec09d0d30992b6e0b4f947a4aba4273723778ed95
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "462000000"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (4.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (18.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh pgrep buildkitd
functional_test.go:303: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 ssh pgrep buildkitd: exit status 1 (6.1586298s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image build -t localhost/my-image:functional-20220602172845-12108 testdata\build
functional_test.go:310: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image build -t localhost/my-image:functional-20220602172845-12108 testdata\build: (7.7782614s)
functional_test.go:315: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image build -t localhost/my-image:functional-20220602172845-12108 testdata\build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 1beaa5aad4da
Removing intermediate container 1beaa5aad4da
---> 89b69c4163e9
Step 3/3 : ADD content.txt /
---> ac00339347b4
Successfully built ac00339347b4
Successfully tagged localhost/my-image:functional-20220602172845-12108
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image ls
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image ls: (4.1267836s)
E0602 17:41:57.255365   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (18.06s)
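The sequence above first probes for buildkitd over ssh (the exit status 1 simply means the docker builder is used) and then runs image build against the testdata\build context. A minimal Go sketch of the same two invocations, reusing the binary path and profile name from this run; error handling is deliberately terse and the sketch is not the test's actual helper code.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "functional-20220602172845-12108"
		bin := `out\minikube-windows-amd64.exe`

		// Probe for buildkitd inside the node; a non-nil error matches the
		// exit status 1 seen above and is not a failure by itself.
		if err := exec.Command(bin, "-p", profile, "ssh", "pgrep", "buildkitd").Run(); err != nil {
			fmt.Println("buildkitd not running; docker builder will be used")
		}

		// Build the image from the test's build context, as in the log above.
		tag := "localhost/my-image:" + profile
		out, err := exec.Command(bin, "-p", profile, "image", "build", "-t", tag, `testdata\build`).CombinedOutput()
		fmt.Println(string(out), err)
	}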

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (5.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.5660531s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220602172845-12108

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Done: docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220602172845-12108: (1.1457405s)
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (19.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220602172845-12108

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220602172845-12108: (15.1013146s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image ls

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image ls: (4.2985441s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (19.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (13.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220602172845-12108
E0602 17:36:57.261777   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220602172845-12108: (9.0138687s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image ls

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image ls: (4.2877048s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (13.30s)

                                                
                                    
TestFunctional/parallel/Version/short (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 version --short
--- PASS: TestFunctional/parallel/Version/short (0.40s)

                                                
                                    
TestFunctional/parallel/Version/components (6.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 version -o=json --components

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 version -o=json --components: (6.4954392s)
--- PASS: TestFunctional/parallel/Version/components (6.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (22.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (4.5000369s)
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220602172845-12108
functional_test.go:235: (dbg) Done: docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220602172845-12108: (1.145633s)
functional_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220602172845-12108

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220602172845-12108: (12.6440219s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image ls

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image ls: (4.2839063s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (22.59s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-20220602172845-12108 tunnel --alsologtostderr] ...
helpers_test.go:506: unable to kill pid 11048: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image save gcr.io/google-containers/addon-resizer:functional-20220602172845-12108 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image save gcr.io/google-containers/addon-resizer:functional-20220602172845-12108 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (8.2008342s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (8.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image rm gcr.io/google-containers/addon-resizer:functional-20220602172845-12108
functional_test.go:387: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image rm gcr.io/google-containers/addon-resizer:functional-20220602172845-12108: (4.2647568s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image ls

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image ls: (4.319306s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (8.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (12.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar
functional_test.go:404: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (8.5115436s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image ls
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image ls: (4.0692227s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (12.58s)
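ImageSaveToFile and ImageLoadFromFile above together exercise a tarball round trip: save writes the image out of the cluster runtime to a tar on the host, and load pushes it back in, with image ls confirming the tag reappears. A minimal Go sketch of that round trip, reusing the binary, profile, and tar path from this run (illustrative only, with terse error handling):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) error {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		profile := "functional-20220602172845-12108"
		bin := `out\minikube-windows-amd64.exe`
		img := "gcr.io/google-containers/addon-resizer:" + profile
		tar := `C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar`

		// Save to a host-side tarball, load it back, then list images
		// to confirm the tag reappears, mirroring the log above.
		for _, args := range [][]string{
			{bin, "-p", profile, "image", "save", img, tar},
			{bin, "-p", profile, "image", "load", tar},
			{bin, "-p", profile, "image", "ls"},
		} {
			if err := run(args...); err != nil {
				panic(err)
			}
		}
	}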

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (12.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220602172845-12108
functional_test.go:414: (dbg) Done: docker rmi gcr.io/google-containers/addon-resizer:functional-20220602172845-12108: (1.0936985s)
functional_test.go:419: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220602172845-12108
functional_test.go:419: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220602172845-12108 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220602172845-12108: (10.2395406s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220602172845-12108
functional_test.go:424: (dbg) Done: docker image inspect gcr.io/google-containers/addon-resizer:functional-20220602172845-12108: (1.0352933s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (12.38s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8: context deadline exceeded (0s)
functional_test.go:187: failed to remove image "gcr.io/google-containers/addon-resizer:1.8.8" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8": context deadline exceeded
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220602172845-12108
functional_test.go:185: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220602172845-12108: context deadline exceeded (53.5µs)
functional_test.go:187: failed to remove image "gcr.io/google-containers/addon-resizer:functional-20220602172845-12108" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220602172845-12108": context deadline exceeded
--- PASS: TestFunctional/delete_addon-resizer_images (0.02s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220602172845-12108
functional_test.go:193: (dbg) Non-zero exit: docker rmi -f localhost/my-image:functional-20220602172845-12108: context deadline exceeded (603.7µs)
functional_test.go:195: failed to remove image my-image from docker images. args "docker rmi -f localhost/my-image:functional-20220602172845-12108": context deadline exceeded
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220602172845-12108
functional_test.go:201: (dbg) Non-zero exit: docker rmi -f minikube-local-cache-test:functional-20220602172845-12108: context deadline exceeded (95µs)
functional_test.go:203: failed to remove image minikube local cache test images from docker. args "docker rmi -f minikube-local-cache-test:functional-20220602172845-12108": context deadline exceeded
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (133.93s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220602180932-12108 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220602180932-12108 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker: (2m13.9296975s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (133.93s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (49.89s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220602180932-12108 addons enable ingress --alsologtostderr -v=5
E0602 18:11:57.262695   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220602180932-12108 addons enable ingress --alsologtostderr -v=5: (49.8885542s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (49.89s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (4.67s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220602180932-12108 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220602180932-12108 addons enable ingress-dns --alsologtostderr -v=5: (4.6677285s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (4.67s)

                                                
                                    
TestJSONOutput/start/Command (128.29s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-20220602181351-12108 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
E0602 18:14:06.657172   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
E0602 18:14:06.672361   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
E0602 18:14:06.687650   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
E0602 18:14:06.718847   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
E0602 18:14:06.766131   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
E0602 18:14:06.862462   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
E0602 18:14:07.032731   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
E0602 18:14:07.357880   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
E0602 18:14:08.011826   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
E0602 18:14:09.298912   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
E0602 18:14:11.868108   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
E0602 18:14:17.002968   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
E0602 18:14:27.250001   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
E0602 18:14:47.739720   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
E0602 18:15:28.715316   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-20220602181351-12108 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: (2m8.2846327s)
--- PASS: TestJSONOutput/start/Command (128.29s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (6.11s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-20220602181351-12108 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-20220602181351-12108 --output=json --user=testUser: (6.1100932s)
--- PASS: TestJSONOutput/pause/Command (6.11s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (5.75s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-20220602181351-12108 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-20220602181351-12108 --output=json --user=testUser: (5.7481439s)
--- PASS: TestJSONOutput/unpause/Command (5.75s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (17.87s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-20220602181351-12108 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-20220602181351-12108 --output=json --user=testUser: (17.8722504s)
--- PASS: TestJSONOutput/stop/Command (17.87s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (7.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-20220602181649-12108 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-20220602181649-12108 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (378.1098ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cf07b661-00f5-4fb7-a8b0-8999d56d3fc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220602181649-12108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4d22226d-9f23-488f-a052-86143feaf2e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube7\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"e26b46b4-ce2a-4795-9126-5880511c5684","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"7473564d-1eea-4cd4-9447-12764354ac06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14269"}}
	{"specversion":"1.0","id":"2e5658d0-84b1-4e80-aa11-50b54e86b3a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5c864088-7b18-40f4-9ff1-206d205947db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220602181649-12108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-20220602181649-12108
E0602 18:16:50.636793   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-20220602181649-12108: (6.866157s)
--- PASS: TestErrorJSONOutput (7.24s)
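Each stdout line above is a CloudEvents-style JSON object emitted by --output=json. For reference, a minimal Go sketch of decoding one such line; the struct mirrors the keys visible in the stdout above and is illustrative, not minikube's own event type.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// event mirrors the keys visible in the --output=json lines above.
	type event struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		// The error event from the stdout above, abbreviated to its key fields.
		line := []byte(`{"specversion":"1.0","id":"5c864088-7b18-40f4-9ff1-206d205947db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS"}}`)
		var ev event
		if err := json.Unmarshal(line, &ev); err != nil {
			panic(err)
		}
		fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"])
	}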

                                                
                                    
TestKicCustomNetwork/create_custom_network (137.23s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20220602181656-12108 --network=
E0602 18:16:57.269314   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 18:17:40.991497   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
E0602 18:17:41.006051   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
E0602 18:17:41.021556   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
E0602 18:17:41.053233   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
E0602 18:17:41.099263   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
E0602 18:17:41.192659   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
E0602 18:17:41.363103   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
E0602 18:17:41.689773   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
E0602 18:17:42.342301   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
E0602 18:17:43.969522   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
E0602 18:17:46.545337   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
E0602 18:17:51.674551   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
E0602 18:18:01.921754   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
E0602 18:18:22.409473   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20220602181656-12108 --network=: (1m55.3156259s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:122: (dbg) Done: docker network ls --format {{.Name}}: (1.0358522s)
helpers_test.go:175: Cleaning up "docker-network-20220602181656-12108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20220602181656-12108
E0602 18:19:03.384766   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
E0602 18:19:06.650139   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20220602181656-12108: (20.8664925s)
--- PASS: TestKicCustomNetwork/create_custom_network (137.23s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (129.6s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20220602181913-12108 --network=bridge
E0602 18:19:34.478613   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
E0602 18:20:25.317922   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20220602181913-12108 --network=bridge: (1m52.4020232s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:122: (dbg) Done: docker network ls --format {{.Name}}: (1.0248126s)
helpers_test.go:175: Cleaning up "docker-network-20220602181913-12108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20220602181913-12108
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20220602181913-12108: (16.1632648s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (129.60s)

                                                
                                    
TestKicExistingNetwork (143.29s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:122: (dbg) Done: docker network ls --format {{.Name}}: (1.0266617s)
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-20220602182127-12108 --network=existing-network
E0602 18:21:57.271476   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 18:22:40.984462   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
E0602 18:23:09.174209   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-20220602182127-12108 --network=existing-network: (1m55.6316955s)
helpers_test.go:175: Cleaning up "existing-network-20220602182127-12108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-20220602182127-12108
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-20220602182127-12108: (21.3167512s)
--- PASS: TestKicExistingNetwork (143.29s)

                                                
                                    
TestKicCustomSubnet (139.79s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-20220602182346-12108 --subnet=192.168.60.0/24
E0602 18:24:06.659323   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-20220602182346-12108 --subnet=192.168.60.0/24: (1m57.8538008s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220602182346-12108 --format "{{(index .IPAM.Config 0).Subnet}}"
kic_custom_network_test.go:133: (dbg) Done: docker network inspect custom-subnet-20220602182346-12108 --format "{{(index .IPAM.Config 0).Subnet}}": (1.0653442s)
helpers_test.go:175: Cleaning up "custom-subnet-20220602182346-12108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-20220602182346-12108
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-20220602182346-12108: (20.8620297s)
--- PASS: TestKicCustomSubnet (139.79s)
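The verification step above reads the subnet back with a Go template that indexes the first IPAM config entry of the network. A minimal sketch of the same check, assuming the network name and subnet from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		name := "custom-subnet-20220602182346-12108"
		// {{(index .IPAM.Config 0).Subnet}} selects the subnet of the first
		// IPAM configuration block in `docker network inspect` output.
		out, err := exec.Command("docker", "network", "inspect", name,
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			panic(err)
		}
		got := strings.TrimSpace(string(out))
		fmt.Println(got == "192.168.60.0/24") // the subnet requested via --subnet
	}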

                                                
                                    
TestMainNoArgs (0.33s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.33s)

                                                
                                    
TestMinikubeProfile (295.95s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-20220602182606-12108 --driver=docker
E0602 18:26:57.270826   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 18:27:40.980940   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-20220602182606-12108 --driver=docker: (1m54.3240129s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-20220602182606-12108 --driver=docker
E0602 18:29:06.655831   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-20220602182606-12108 --driver=docker: (1m51.9377578s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-20220602182606-12108
minikube_profile_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe profile first-20220602182606-12108: (2.9392508s)
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (10.2270695s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-20220602182606-12108
minikube_profile_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe profile second-20220602182606-12108: (2.9342645s)
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (10.0201193s)
helpers_test.go:175: Cleaning up "second-20220602182606-12108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-20220602182606-12108
E0602 18:30:29.856958   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-20220602182606-12108: (23.1690384s)
helpers_test.go:175: Cleaning up "first-20220602182606-12108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-20220602182606-12108
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-20220602182606-12108: (20.3957015s)
--- PASS: TestMinikubeProfile (295.95s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (49.48s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-20220602183102-12108 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-20220602183102-12108 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (48.4753668s)
--- PASS: TestMountStart/serial/StartWithMountFirst (49.48s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (5.97s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-20220602183102-12108 ssh -- ls /minikube-host
E0602 18:31:57.280120   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-20220602183102-12108 ssh -- ls /minikube-host: (5.9715111s)
--- PASS: TestMountStart/serial/VerifyMountFirst (5.97s)

TestMountStart/serial/StartWithMountSecond (49.82s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-20220602183102-12108 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
E0602 18:32:40.986885   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-20220602183102-12108 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (48.8080726s)
--- PASS: TestMountStart/serial/StartWithMountSecond (49.82s)

TestMountStart/serial/VerifyMountSecond (6.08s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-20220602183102-12108 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-20220602183102-12108 ssh -- ls /minikube-host: (6.078441s)
--- PASS: TestMountStart/serial/VerifyMountSecond (6.08s)

TestMountStart/serial/DeleteFirst (18.97s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-20220602183102-12108 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-20220602183102-12108 --alsologtostderr -v=5: (18.9717015s)
--- PASS: TestMountStart/serial/DeleteFirst (18.97s)

TestMountStart/serial/VerifyMountPostDelete (6.08s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-20220602183102-12108 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-20220602183102-12108 ssh -- ls /minikube-host: (6.0762836s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (6.08s)

TestMountStart/serial/Stop (8.38s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-20220602183102-12108
E0602 18:33:20.456684   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-20220602183102-12108: (8.377072s)
--- PASS: TestMountStart/serial/Stop (8.38s)

TestMountStart/serial/RestartStopped (28.92s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-20220602183102-12108
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-20220602183102-12108: (27.9225262s)
--- PASS: TestMountStart/serial/RestartStopped (28.92s)

TestMountStart/serial/VerifyMountPostStop (6.40s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-20220602183102-12108 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-20220602183102-12108 ssh -- ls /minikube-host: (6.4008735s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (6.40s)
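
Every Verify* step in this mount group is the same probe: the mount is considered healthy iff `ssh -- ls /minikube-host` exits zero. A condensed sketch of that check (hypothetical helper, not the code in mount_start_test.go):

package main

import (
	"log"
	"os/exec"
)

// verifyMount reports whether the host mount is visible inside the node:
// listing /minikube-host over ssh must succeed (a non-zero exit means the
// mount is absent or the machine is unreachable).
func verifyMount(profile string) error {
	return exec.Command(`out/minikube-windows-amd64.exe`,
		"-p", profile, "ssh", "--", "ls", "/minikube-host").Run()
}

func main() {
	if err := verifyMount("mount-start-2-20220602183102-12108"); err != nil {
		log.Fatalf("mount check failed: %v", err)
	}
	log.Println("mount check passed")
}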

TestMultiNode/serial/FreshStart2Nodes (261.00s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220602183426-12108 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
E0602 18:36:57.270014   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 18:37:40.992856   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
multinode_test.go:83: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220602183426-12108 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (4m11.1948196s)
multinode_test.go:89: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 status --alsologtostderr
multinode_test.go:89: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 status --alsologtostderr: (9.804374s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (261.00s)

TestMultiNode/serial/DeployApp2Nodes (25.28s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:479: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (2.6103762s)
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- rollout status deployment/busybox: (3.5891877s)
multinode_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- get pods -o jsonpath='{.items[*].status.podIP}': (1.8978725s)
multinode_test.go:502: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:502: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- get pods -o jsonpath='{.items[*].metadata.name}': (1.9360151s)
multinode_test.go:510: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- exec busybox-7978565885-cq5mh -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- exec busybox-7978565885-cq5mh -- nslookup kubernetes.io: (3.3400346s)
multinode_test.go:510: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- exec busybox-7978565885-w86bp -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- exec busybox-7978565885-w86bp -- nslookup kubernetes.io: (3.2818007s)
multinode_test.go:520: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- exec busybox-7978565885-cq5mh -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- exec busybox-7978565885-cq5mh -- nslookup kubernetes.default: (2.1789974s)
multinode_test.go:520: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- exec busybox-7978565885-w86bp -- nslookup kubernetes.default
E0602 18:39:06.662955   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
multinode_test.go:520: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- exec busybox-7978565885-w86bp -- nslookup kubernetes.default: (2.1954874s)
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- exec busybox-7978565885-cq5mh -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- exec busybox-7978565885-cq5mh -- nslookup kubernetes.default.svc.cluster.local: (2.1735958s)
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- exec busybox-7978565885-w86bp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- exec busybox-7978565885-w86bp -- nslookup kubernetes.default.svc.cluster.local: (2.0750284s)
--- PASS: TestMultiNode/serial/DeployApp2Nodes (25.28s)
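
The nslookup sequence above checks three name classes from a pod on each node: an external domain, the short in-cluster service name, and its fully qualified form. A sketch of that loop under the same assumptions (hypothetical helper; pod and profile names are taken from this run):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// dnsTargets covers the three resolution paths the test exercises.
var dnsTargets = []string{
	"kubernetes.io",                        // external DNS
	"kubernetes.default",                   // search-path expansion
	"kubernetes.default.svc.cluster.local", // fully qualified service name
}

// checkPodDNS runs nslookup inside one pod for each target.
func checkPodDNS(profile, pod string) error {
	for _, t := range dnsTargets {
		cmd := exec.Command(`out/minikube-windows-amd64.exe`, "kubectl",
			"-p", profile, "--", "exec", pod, "--", "nslookup", t)
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("%s could not resolve %s: %w", pod, t, err)
		}
	}
	return nil
}

func main() {
	pods := []string{"busybox-7978565885-cq5mh", "busybox-7978565885-w86bp"}
	for _, pod := range pods {
		if err := checkPodDNS("multinode-20220602183426-12108", pod); err != nil {
			log.Fatal(err)
		}
	}
}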

TestMultiNode/serial/PingHostFrom2Pods (10.60s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:538: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- get pods -o jsonpath='{.items[*].metadata.name}': (1.9529779s)
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- exec busybox-7978565885-cq5mh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:546: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- exec busybox-7978565885-cq5mh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (2.1620129s)
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- exec busybox-7978565885-cq5mh -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- exec busybox-7978565885-cq5mh -- sh -c "ping -c 1 192.168.65.2": (2.1421337s)
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- exec busybox-7978565885-w86bp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:546: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- exec busybox-7978565885-w86bp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (2.1472495s)
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- exec busybox-7978565885-w86bp -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220602183426-12108 -- exec busybox-7978565885-w86bp -- sh -c "ping -c 1 192.168.65.2": (2.1965756s)
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (10.60s)
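
A note on the shell pipeline: busybox nslookup prints the resolved address on its fifth output line, third space-separated field, which is what `awk 'NR==5' | cut -d' ' -f3` extracts before the follow-up `ping -c 1`. A hedged one-shot combination (hypothetical; the test itself runs the lookup and the ping as separate execs):

package main

import (
	"log"
	"os/exec"
)

// pingHostFromPod resolves host.minikube.internal inside the pod and pings
// the result in one shell invocation. The NR==5 / field-3 offsets assume the
// busybox nslookup output layout seen in this run.
func pingHostFromPod(profile, pod string) error {
	script := `ping -c 1 $(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)`
	return exec.Command(`out/minikube-windows-amd64.exe`, "kubectl",
		"-p", profile, "--", "exec", pod, "--", "sh", "-c", script).Run()
}

func main() {
	if err := pingHostFromPod("multinode-20220602183426-12108", "busybox-7978565885-cq5mh"); err != nil {
		log.Fatal(err)
	}
}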

TestMultiNode/serial/AddNode (116.56s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20220602183426-12108 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-20220602183426-12108 -v 3 --alsologtostderr: (1m43.0991707s)
multinode_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 status --alsologtostderr: (13.4593566s)
--- PASS: TestMultiNode/serial/AddNode (116.56s)

TestMultiNode/serial/ProfileList (6.42s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (6.4145603s)
--- PASS: TestMultiNode/serial/ProfileList (6.42s)

TestMultiNode/serial/CopyFile (216.66s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 status --output json --alsologtostderr: (13.2067742s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 cp testdata\cp-test.txt multinode-20220602183426-12108:/home/docker/cp-test.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 cp testdata\cp-test.txt multinode-20220602183426-12108:/home/docker/cp-test.txt: (6.3045922s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108 "sudo cat /home/docker/cp-test.txt": (6.3142053s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 cp multinode-20220602183426-12108:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile4215155118\001\cp-test_multinode-20220602183426-12108.txt
E0602 18:41:57.271797   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 cp multinode-20220602183426-12108:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile4215155118\001\cp-test_multinode-20220602183426-12108.txt: (6.1739696s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108 "sudo cat /home/docker/cp-test.txt": (6.3535976s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 cp multinode-20220602183426-12108:/home/docker/cp-test.txt multinode-20220602183426-12108-m02:/home/docker/cp-test_multinode-20220602183426-12108_multinode-20220602183426-12108-m02.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 cp multinode-20220602183426-12108:/home/docker/cp-test.txt multinode-20220602183426-12108-m02:/home/docker/cp-test_multinode-20220602183426-12108_multinode-20220602183426-12108-m02.txt: (8.6389897s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108 "sudo cat /home/docker/cp-test.txt": (6.2816701s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108-m02 "sudo cat /home/docker/cp-test_multinode-20220602183426-12108_multinode-20220602183426-12108-m02.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108-m02 "sudo cat /home/docker/cp-test_multinode-20220602183426-12108_multinode-20220602183426-12108-m02.txt": (6.2452644s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 cp multinode-20220602183426-12108:/home/docker/cp-test.txt multinode-20220602183426-12108-m03:/home/docker/cp-test_multinode-20220602183426-12108_multinode-20220602183426-12108-m03.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 cp multinode-20220602183426-12108:/home/docker/cp-test.txt multinode-20220602183426-12108-m03:/home/docker/cp-test_multinode-20220602183426-12108_multinode-20220602183426-12108-m03.txt: (8.5402297s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108 "sudo cat /home/docker/cp-test.txt": (6.2810943s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108-m03 "sudo cat /home/docker/cp-test_multinode-20220602183426-12108_multinode-20220602183426-12108-m03.txt"
E0602 18:42:41.000268   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108-m03 "sudo cat /home/docker/cp-test_multinode-20220602183426-12108_multinode-20220602183426-12108-m03.txt": (6.1966733s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 cp testdata\cp-test.txt multinode-20220602183426-12108-m02:/home/docker/cp-test.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 cp testdata\cp-test.txt multinode-20220602183426-12108-m02:/home/docker/cp-test.txt: (6.3737707s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108-m02 "sudo cat /home/docker/cp-test.txt": (6.335031s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 cp multinode-20220602183426-12108-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile4215155118\001\cp-test_multinode-20220602183426-12108-m02.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 cp multinode-20220602183426-12108-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile4215155118\001\cp-test_multinode-20220602183426-12108-m02.txt: (6.3315506s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108-m02 "sudo cat /home/docker/cp-test.txt": (6.261655s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 cp multinode-20220602183426-12108-m02:/home/docker/cp-test.txt multinode-20220602183426-12108:/home/docker/cp-test_multinode-20220602183426-12108-m02_multinode-20220602183426-12108.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 cp multinode-20220602183426-12108-m02:/home/docker/cp-test.txt multinode-20220602183426-12108:/home/docker/cp-test_multinode-20220602183426-12108-m02_multinode-20220602183426-12108.txt: (8.745977s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108-m02 "sudo cat /home/docker/cp-test.txt": (6.2994911s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108 "sudo cat /home/docker/cp-test_multinode-20220602183426-12108-m02_multinode-20220602183426-12108.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108 "sudo cat /home/docker/cp-test_multinode-20220602183426-12108-m02_multinode-20220602183426-12108.txt": (6.3518938s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 cp multinode-20220602183426-12108-m02:/home/docker/cp-test.txt multinode-20220602183426-12108-m03:/home/docker/cp-test_multinode-20220602183426-12108-m02_multinode-20220602183426-12108-m03.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 cp multinode-20220602183426-12108-m02:/home/docker/cp-test.txt multinode-20220602183426-12108-m03:/home/docker/cp-test_multinode-20220602183426-12108-m02_multinode-20220602183426-12108-m03.txt: (8.6840296s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108-m02 "sudo cat /home/docker/cp-test.txt": (6.346975s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108-m03 "sudo cat /home/docker/cp-test_multinode-20220602183426-12108-m02_multinode-20220602183426-12108-m03.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108-m03 "sudo cat /home/docker/cp-test_multinode-20220602183426-12108-m02_multinode-20220602183426-12108-m03.txt": (6.30622s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 cp testdata\cp-test.txt multinode-20220602183426-12108-m03:/home/docker/cp-test.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 cp testdata\cp-test.txt multinode-20220602183426-12108-m03:/home/docker/cp-test.txt: (6.2986744s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108-m03 "sudo cat /home/docker/cp-test.txt"
E0602 18:44:06.660354   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108-m03 "sudo cat /home/docker/cp-test.txt": (6.3009573s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 cp multinode-20220602183426-12108-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile4215155118\001\cp-test_multinode-20220602183426-12108-m03.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 cp multinode-20220602183426-12108-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile4215155118\001\cp-test_multinode-20220602183426-12108-m03.txt: (6.4072311s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108-m03 "sudo cat /home/docker/cp-test.txt": (6.3490332s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 cp multinode-20220602183426-12108-m03:/home/docker/cp-test.txt multinode-20220602183426-12108:/home/docker/cp-test_multinode-20220602183426-12108-m03_multinode-20220602183426-12108.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 cp multinode-20220602183426-12108-m03:/home/docker/cp-test.txt multinode-20220602183426-12108:/home/docker/cp-test_multinode-20220602183426-12108-m03_multinode-20220602183426-12108.txt: (8.8021729s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108-m03 "sudo cat /home/docker/cp-test.txt": (6.3477523s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108 "sudo cat /home/docker/cp-test_multinode-20220602183426-12108-m03_multinode-20220602183426-12108.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108 "sudo cat /home/docker/cp-test_multinode-20220602183426-12108-m03_multinode-20220602183426-12108.txt": (6.2567196s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 cp multinode-20220602183426-12108-m03:/home/docker/cp-test.txt multinode-20220602183426-12108-m02:/home/docker/cp-test_multinode-20220602183426-12108-m03_multinode-20220602183426-12108-m02.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 cp multinode-20220602183426-12108-m03:/home/docker/cp-test.txt multinode-20220602183426-12108-m02:/home/docker/cp-test_multinode-20220602183426-12108-m03_multinode-20220602183426-12108-m02.txt: (8.6989672s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108-m03 "sudo cat /home/docker/cp-test.txt": (6.2838156s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108-m02 "sudo cat /home/docker/cp-test_multinode-20220602183426-12108-m03_multinode-20220602183426-12108-m02.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 ssh -n multinode-20220602183426-12108-m02 "sudo cat /home/docker/cp-test_multinode-20220602183426-12108-m03_multinode-20220602183426-12108-m02.txt": (6.3267354s)
--- PASS: TestMultiNode/serial/CopyFile (216.66s)
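
The cp/cat sequence above is a full matrix: push a local file to each node, pull it back to a temp dir, then copy node to node, cat-ing every destination over ssh. A compressed sketch of the node-to-node leg (hypothetical helper; file naming follows the pattern in the log):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func run(args ...string) error {
	return exec.Command(`out/minikube-windows-amd64.exe`, args...).Run()
}

// copyBetweenNodes copies /home/docker/cp-test.txt from src to every other
// node and verifies each destination with `ssh -n <dst> sudo cat <file>`.
func copyBetweenNodes(profile, src string, nodes []string) error {
	for _, dst := range nodes {
		if dst == src {
			continue
		}
		dest := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
		if err := run("-p", profile, "cp", src+":/home/docker/cp-test.txt", dst+":"+dest); err != nil {
			return err
		}
		if err := run("-p", profile, "ssh", "-n", dst, "sudo cat "+dest); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	profile := "multinode-20220602183426-12108"
	nodes := []string{profile, profile + "-m02", profile + "-m03"}
	for _, src := range nodes {
		if err := copyBetweenNodes(profile, src, nodes); err != nil {
			log.Fatal(err)
		}
	}
}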

TestMultiNode/serial/StopNode (29.42s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 node stop m03: (7.5836837s)
multinode_test.go:214: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 status: exit status 7 (10.9857082s)

-- stdout --
	multinode-20220602183426-12108
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220602183426-12108-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220602183426-12108-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 status --alsologtostderr: exit status 7 (10.8458807s)

-- stdout --
	multinode-20220602183426-12108
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220602183426-12108-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220602183426-12108-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0602 18:45:21.821452   11128 out.go:296] Setting OutFile to fd 464 ...
	I0602 18:45:21.887080   11128 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 18:45:21.887080   11128 out.go:309] Setting ErrFile to fd 668...
	I0602 18:45:21.887246   11128 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 18:45:21.894728   11128 out.go:303] Setting JSON to false
	I0602 18:45:21.894728   11128 mustload.go:65] Loading cluster: multinode-20220602183426-12108
	I0602 18:45:21.894728   11128 config.go:178] Loaded profile config "multinode-20220602183426-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 18:45:21.894728   11128 status.go:253] checking status of multinode-20220602183426-12108 ...
	I0602 18:45:21.910658   11128 cli_runner.go:164] Run: docker container inspect multinode-20220602183426-12108 --format={{.State.Status}}
	I0602 18:45:24.369823   11128 cli_runner.go:217] Completed: docker container inspect multinode-20220602183426-12108 --format={{.State.Status}}: (2.4591542s)
	I0602 18:45:24.369823   11128 status.go:328] multinode-20220602183426-12108 host status = "Running" (err=<nil>)
	I0602 18:45:24.369823   11128 host.go:66] Checking if "multinode-20220602183426-12108" exists ...
	I0602 18:45:24.378481   11128 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220602183426-12108
	I0602 18:45:25.430448   11128 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220602183426-12108: (1.0519631s)
	I0602 18:45:25.430809   11128 host.go:66] Checking if "multinode-20220602183426-12108" exists ...
	I0602 18:45:25.443587   11128 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 18:45:25.446527   11128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602183426-12108
	I0602 18:45:26.523768   11128 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602183426-12108: (1.0770774s)
	I0602 18:45:26.524466   11128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52355 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-20220602183426-12108\id_rsa Username:docker}
	I0602 18:45:26.654597   11128 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.2109676s)
	I0602 18:45:26.667313   11128 ssh_runner.go:195] Run: systemctl --version
	I0602 18:45:26.696723   11128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 18:45:26.736924   11128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220602183426-12108
	I0602 18:45:27.796301   11128 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220602183426-12108: (1.0592109s)
	I0602 18:45:27.797934   11128 kubeconfig.go:92] found "multinode-20220602183426-12108" server: "https://127.0.0.1:52354"
	I0602 18:45:27.797986   11128 api_server.go:165] Checking apiserver status ...
	I0602 18:45:27.812529   11128 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 18:45:27.857402   11128 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1758/cgroup
	I0602 18:45:27.894401   11128 api_server.go:181] apiserver freezer: "7:freezer:/docker/666016d0ef9948e81f51558fa7f69391573ac95895ad6485bfdd5181ff5a5a3d/kubepods/burstable/pod848f1228e732a9b8c64c727b2d0cc843/0898148cc6d47dbfaf8833d98420e82acce4ec82b950db517eefcb4ffecf85ad"
	I0602 18:45:27.904578   11128 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/666016d0ef9948e81f51558fa7f69391573ac95895ad6485bfdd5181ff5a5a3d/kubepods/burstable/pod848f1228e732a9b8c64c727b2d0cc843/0898148cc6d47dbfaf8833d98420e82acce4ec82b950db517eefcb4ffecf85ad/freezer.state
	I0602 18:45:27.928136   11128 api_server.go:203] freezer state: "THAWED"
	I0602 18:45:27.928168   11128 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52354/healthz ...
	I0602 18:45:27.949603   11128 api_server.go:266] https://127.0.0.1:52354/healthz returned 200:
	ok
	I0602 18:45:27.949603   11128 status.go:419] multinode-20220602183426-12108 apiserver status = Running (err=<nil>)
	I0602 18:45:27.949603   11128 status.go:255] multinode-20220602183426-12108 status: &{Name:multinode-20220602183426-12108 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0602 18:45:27.949603   11128 status.go:253] checking status of multinode-20220602183426-12108-m02 ...
	I0602 18:45:27.965072   11128 cli_runner.go:164] Run: docker container inspect multinode-20220602183426-12108-m02 --format={{.State.Status}}
	I0602 18:45:29.023585   11128 cli_runner.go:217] Completed: docker container inspect multinode-20220602183426-12108-m02 --format={{.State.Status}}: (1.0583918s)
	I0602 18:45:29.023680   11128 status.go:328] multinode-20220602183426-12108-m02 host status = "Running" (err=<nil>)
	I0602 18:45:29.023680   11128 host.go:66] Checking if "multinode-20220602183426-12108-m02" exists ...
	I0602 18:45:29.033773   11128 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220602183426-12108-m02
	I0602 18:45:30.088112   11128 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220602183426-12108-m02: (1.0541627s)
	I0602 18:45:30.088112   11128 host.go:66] Checking if "multinode-20220602183426-12108-m02" exists ...
	I0602 18:45:30.097433   11128 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 18:45:30.100442   11128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602183426-12108-m02
	I0602 18:45:31.161167   11128 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602183426-12108-m02: (1.0607204s)
	I0602 18:45:31.161751   11128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52410 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-20220602183426-12108-m02\id_rsa Username:docker}
	I0602 18:45:31.290665   11128 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.1932266s)
	I0602 18:45:31.303284   11128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 18:45:31.335346   11128 status.go:255] multinode-20220602183426-12108-m02 status: &{Name:multinode-20220602183426-12108-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0602 18:45:31.335465   11128 status.go:253] checking status of multinode-20220602183426-12108-m03 ...
	I0602 18:45:31.348514   11128 cli_runner.go:164] Run: docker container inspect multinode-20220602183426-12108-m03 --format={{.State.Status}}
	I0602 18:45:32.400227   11128 cli_runner.go:217] Completed: docker container inspect multinode-20220602183426-12108-m03 --format={{.State.Status}}: (1.0517081s)
	I0602 18:45:32.400227   11128 status.go:328] multinode-20220602183426-12108-m03 host status = "Stopped" (err=<nil>)
	I0602 18:45:32.400227   11128 status.go:341] host is not running, skipping remaining checks
	I0602 18:45:32.400227   11128 status.go:255] multinode-20220602183426-12108-m03 status: &{Name:multinode-20220602183426-12108-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (29.42s)
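
The --alsologtostderr trace above maps out how `status` decides health: inspect the container state, ssh in to check kubelet, locate the apiserver pid and its freezer cgroup, then GET /healthz on the forwarded port. A sketch of that final probe (hypothetical; minikube itself authenticates with the cluster certificates rather than skipping verification):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// healthzOK mirrors the last step of the status trace: the apiserver is
// treated as healthy iff GET /healthz returns HTTP 200.
func healthzOK(server string) bool {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch-level shortcut only; the real code presents client certs.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(server + "/healthz")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	// 52354 is the forwarded apiserver port logged in the trace above.
	fmt.Println("apiserver healthy:", healthzOK("https://127.0.0.1:52354"))
}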

TestMultiNode/serial/StartAfterStop (61.39s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:242: (dbg) Done: docker version -f {{.Server.Version}}: (1.1137214s)
multinode_test.go:252: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 node start m03 --alsologtostderr: (46.7368002s)
multinode_test.go:259: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 status
multinode_test.go:259: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 status: (13.2527576s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (61.39s)

TestMultiNode/serial/RestartKeepsNodes (218.87s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220602183426-12108
multinode_test.go:288: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-20220602183426-12108
E0602 18:46:57.274051   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 18:47:09.863062   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
multinode_test.go:288: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-20220602183426-12108: (37.934056s)
multinode_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220602183426-12108 --wait=true -v=8 --alsologtostderr
E0602 18:47:40.990288   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
E0602 18:49:06.663232   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
E0602 18:50:00.467933   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
multinode_test.go:293: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220602183426-12108 --wait=true -v=8 --alsologtostderr: (3m0.2197888s)
multinode_test.go:298: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220602183426-12108
--- PASS: TestMultiNode/serial/RestartKeepsNodes (218.87s)

TestMultiNode/serial/DeleteNode (45.55s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 node delete m03
E0602 18:50:44.549214   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
multinode_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 node delete m03: (33.9961699s)
multinode_test.go:398: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 status --alsologtostderr
multinode_test.go:398: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 status --alsologtostderr: (9.8877457s)
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:412: (dbg) Done: docker volume ls: (1.0945076s)
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (45.55s)
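
The quoted go-template walks each node's status.conditions and prints the Ready condition's status, so the two surviving nodes should print two True lines after the delete. An equivalent check against kubectl's JSON output (a sketch; field names follow the standard Kubernetes node schema):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// nodeList models only the fields the Ready check needs.
type nodeList struct {
	Items []struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var nl nodeList
	if err := json.Unmarshal(out, &nl); err != nil {
		log.Fatal(err)
	}
	for _, n := range nl.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Println(c.Status) // expect "True" for each surviving node
			}
		}
	}
}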

TestMultiNode/serial/StopMultiNode (40.28s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 stop
multinode_test.go:312: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 stop: (32.7047018s)
multinode_test.go:318: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 status: exit status 7 (3.8359092s)

-- stdout --
	multinode-20220602183426-12108
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220602183426-12108-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 status --alsologtostderr: exit status 7 (3.7404089s)

-- stdout --
	multinode-20220602183426-12108
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220602183426-12108-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0602 18:51:35.024928    7928 out.go:296] Setting OutFile to fd 776 ...
	I0602 18:51:35.080929    7928 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 18:51:35.080929    7928 out.go:309] Setting ErrFile to fd 708...
	I0602 18:51:35.080929    7928 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 18:51:35.091632    7928 out.go:303] Setting JSON to false
	I0602 18:51:35.092609    7928 mustload.go:65] Loading cluster: multinode-20220602183426-12108
	I0602 18:51:35.092829    7928 config.go:178] Loaded profile config "multinode-20220602183426-12108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 18:51:35.092829    7928 status.go:253] checking status of multinode-20220602183426-12108 ...
	I0602 18:51:35.106887    7928 cli_runner.go:164] Run: docker container inspect multinode-20220602183426-12108 --format={{.State.Status}}
	I0602 18:51:37.478843    7928 cli_runner.go:217] Completed: docker container inspect multinode-20220602183426-12108 --format={{.State.Status}}: (2.3719454s)
	I0602 18:51:37.478843    7928 status.go:328] multinode-20220602183426-12108 host status = "Stopped" (err=<nil>)
	I0602 18:51:37.478843    7928 status.go:341] host is not running, skipping remaining checks
	I0602 18:51:37.478843    7928 status.go:255] multinode-20220602183426-12108 status: &{Name:multinode-20220602183426-12108 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0602 18:51:37.478843    7928 status.go:253] checking status of multinode-20220602183426-12108-m02 ...
	I0602 18:51:37.494331    7928 cli_runner.go:164] Run: docker container inspect multinode-20220602183426-12108-m02 --format={{.State.Status}}
	I0602 18:51:38.499185    7928 cli_runner.go:217] Completed: docker container inspect multinode-20220602183426-12108-m02 --format={{.State.Status}}: (1.0048504s)
	I0602 18:51:38.499185    7928 status.go:328] multinode-20220602183426-12108-m02 host status = "Stopped" (err=<nil>)
	I0602 18:51:38.499185    7928 status.go:341] host is not running, skipping remaining checks
	I0602 18:51:38.499185    7928 status.go:255] multinode-20220602183426-12108-m02 status: &{Name:multinode-20220602183426-12108-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (40.28s)

TestMultiNode/serial/RestartMultiNode (144.03s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:342: (dbg) Done: docker version -f {{.Server.Version}}: (1.0903758s)
multinode_test.go:352: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220602183426-12108 --wait=true -v=8 --alsologtostderr --driver=docker
E0602 18:51:57.276165   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 18:52:40.996064   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
multinode_test.go:352: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220602183426-12108 --wait=true -v=8 --alsologtostderr --driver=docker: (2m12.3302708s)
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220602183426-12108 status --alsologtostderr: (10.0213234s)
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (144.03s)

TestMultiNode/serial/ValidateNameConflict (140.12s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220602183426-12108
multinode_test.go:450: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220602183426-12108-m02 --driver=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220602183426-12108-m02 --driver=docker: exit status 14 (423.4394ms)

-- stdout --
	* [multinode-20220602183426-12108-m02] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14269
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220602183426-12108-m02' is duplicated with machine name 'multinode-20220602183426-12108-m02' in profile 'multinode-20220602183426-12108'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220602183426-12108-m03 --driver=docker
E0602 18:54:06.660469   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
multinode_test.go:458: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220602183426-12108-m03 --driver=docker: (1m52.2927166s)
multinode_test.go:465: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20220602183426-12108
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-20220602183426-12108: exit status 80 (5.6580703s)

-- stdout --
	* Adding node m03 to cluster multinode-20220602183426-12108
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220602183426-12108-m03 already exists in multinode-20220602183426-12108-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube_node_f30df829a49c27e09829ed66f8254940e71c1eac_15.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-20220602183426-12108-m03
multinode_test.go:470: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-20220602183426-12108-m03: (21.3832567s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (140.12s)
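
Both non-zero exits above enforce one rule: a new profile name must not collide with a machine name owned by an existing profile, and multi-node machines take -m02, -m03, ... suffixes. A minimal sketch of that validation (hypothetical, not minikube's own code):

package main

import "fmt"

// machineNames expands a profile into the machine names it owns, using the
// -mNN suffix convention visible in the log above.
func machineNames(profile string, nodes int) []string {
	names := []string{profile}
	for i := 2; i <= nodes; i++ {
		names = append(names, fmt.Sprintf("%s-m%02d", profile, i))
	}
	return names
}

// conflicts reports whether a candidate profile name collides with any
// machine of an existing profile.
func conflicts(candidate, existingProfile string, nodes int) bool {
	for _, m := range machineNames(existingProfile, nodes) {
		if m == candidate {
			return true
		}
	}
	return false
}

func main() {
	// Reproduces the exit-status-14 case above.
	fmt.Println(conflicts("multinode-20220602183426-12108-m02",
		"multinode-20220602183426-12108", 2)) // true -> MK_USAGE
}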

TestPreload (349.33s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20220602185659-12108 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0
E0602 18:57:40.991114   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
E0602 18:59:06.662555   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
preload_test.go:48: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-20220602185659-12108 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0: (2m44.1768848s)
preload_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-20220602185659-12108 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:61: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-20220602185659-12108 -- docker pull gcr.io/k8s-minikube/busybox: (7.6701323s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20220602185659-12108 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.17.3
E0602 19:01:57.283953   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-20220602185659-12108 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.17.3: (2m26.1786592s)
preload_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-20220602185659-12108 -- docker images
preload_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-20220602185659-12108 -- docker images: (6.7125869s)
helpers_test.go:175: Cleaning up "test-preload-20220602185659-12108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-20220602185659-12108
E0602 19:02:40.989137   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-20220602185659-12108: (24.5877215s)
--- PASS: TestPreload (349.33s)
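
Note: this test exercises the preload path end to end: start with `--preload=false` on v1.17.0 (forcing images to be pulled rather than restored from the preloaded tarball), pull an extra image, upgrade the same profile to v1.17.3, and confirm with `docker images` that the pulled image survived the restart. A hand-run sketch of the same flow (`<profile>` is a placeholder):

    minikube start -p <profile> --preload=false --kubernetes-version=v1.17.0 --driver=docker
    minikube ssh -p <profile> -- docker pull gcr.io/k8s-minikube/busybox
    minikube start -p <profile> --kubernetes-version=v1.17.3 --driver=docker
    minikube ssh -p <profile> -- docker images    # busybox should still be listed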

                                                
                                    
TestScheduledStopWindows (219.79s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-20220602190248-12108 --memory=2048 --driver=docker
E0602 19:03:49.874856   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
E0602 19:04:06.671714   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-20220602190248-12108 --memory=2048 --driver=docker: (1m50.7530237s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-20220602190248-12108 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-20220602190248-12108 --schedule 5m: (6.1154611s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-20220602190248-12108 -n scheduled-stop-20220602190248-12108
scheduled_stop_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-20220602190248-12108 -n scheduled-stop-20220602190248-12108: (6.65278s)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-20220602190248-12108 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-20220602190248-12108 -- sudo systemctl show minikube-scheduled-stop --no-page: (6.2513743s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-20220602190248-12108 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-20220602190248-12108 --schedule 5s: (4.7470477s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-20220602190248-12108
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-20220602190248-12108: exit status 7 (2.7851367s)

                                                
                                                
-- stdout --
	scheduled-stop-20220602190248-12108
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20220602190248-12108 -n scheduled-stop-20220602190248-12108
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20220602190248-12108 -n scheduled-stop-20220602190248-12108: exit status 7 (2.7795896s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220602190248-12108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-20220602190248-12108
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-20220602190248-12108: (19.6891746s)
--- PASS: TestScheduledStopWindows (219.79s)
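
Note: the scheduled-stop flow above is: arm a stop with `--schedule`, confirm the countdown via the systemd unit and `status`, re-arm with a shorter window, then verify the host reaches Stopped (plain `status` exits 7, which the harness treats as acceptable). A sketch with a placeholder profile:

    minikube stop -p <profile> --schedule 5m
    minikube status -p <profile> --format={{.TimeToStop}}    # remaining time while armed
    minikube stop -p <profile> --schedule 5s                 # re-arm; host stops ~5s later
    minikube status -p <profile>                             # exit status 7 once Stopped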

                                                
                                    
TestInsufficientStorage (108.12s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-20220602190628-12108 --memory=2048 --output=json --wait=true --driver=docker
E0602 19:06:40.487017   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 19:06:57.276462   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 19:07:24.558346   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
E0602 19:07:40.998273   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-20220602190628-12108 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (1m16.5860516s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5ed3e675-1be0-4fe3-b61f-ab63e44301f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220602190628-12108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d6783ad5-1e80-49ae-8085-3a67ee2e854d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube7\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"f58743a9-99b6-471e-8081-a7c1e9b49c3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"361a74fa-e08e-4d47-a08f-9a4ced499ac9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14269"}}
	{"specversion":"1.0","id":"4545678e-80db-49f6-9fb6-0e8263c6d671","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a279d361-9e11-41c8-bdd4-c7fb0d472730","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ec3f33ac-f39f-4226-8af7-48641475d017","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"7d7dfda8-a874-4ab0-946a-847927657776","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"df41061c-26f4-4d64-a545-885659648df4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with the root privilege"}}
	{"specversion":"1.0","id":"1d501347-cb95-4f14-984d-ec665fd8a634","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220602190628-12108 in cluster insufficient-storage-20220602190628-12108","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"646ec91c-e20d-4f40-90c4-d18f4e590746","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b13feb12-8b99-448b-a643-b4e18d39d2eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8692ecd2-9e4b-4fe3-9d0b-c7def461f7c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20220602190628-12108 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20220602190628-12108 --output=json --layout=cluster: exit status 7 (6.170536s)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20220602190628-12108","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220602190628-12108","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0602 19:07:51.269259    2084 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220602190628-12108" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20220602190628-12108 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20220602190628-12108 --output=json --layout=cluster: exit status 7 (6.0258821s)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20220602190628-12108","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220602190628-12108","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0602 19:07:57.298101    9832 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220602190628-12108" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	E0602 19:07:57.343911    9832 status.go:557] unable to read event log: stat: CreateFile C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\insufficient-storage-20220602190628-12108\events.json: The system cannot find the file specified.

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220602190628-12108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-20220602190628-12108
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-20220602190628-12108: (19.3376708s)
--- PASS: TestInsufficientStorage (108.12s)
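
Note: the exit-26 payload above carries its own remediation advice; the test provokes the condition artificially via MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE (visible in the JSON) rather than by filling the disk. Restated from the error's `advice` field, not new guidance:

    docker system prune                    # optionally with -a, to reclaim host-side space
    minikube ssh -- docker system prune    # inside the node, for the Docker container runtime
    minikube start ... --force             # per the message, skips the storage check (last resort)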

                                                
                                    
TestRunningBinaryUpgrade (375.11s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.9.0.357661199.exe start -p running-upgrade-20220602191616-12108 --memory=2200 --vm-driver=docker

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.9.0.357661199.exe start -p running-upgrade-20220602191616-12108 --memory=2200 --vm-driver=docker: (4m9.3524285s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-20220602191616-12108 --memory=2200 --alsologtostderr -v=1 --driver=docker
E0602 19:20:29.884142   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-20220602191616-12108 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m36.1524577s)
helpers_test.go:175: Cleaning up "running-upgrade-20220602191616-12108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-20220602191616-12108

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-20220602191616-12108: (29.0439522s)
--- PASS: TestRunningBinaryUpgrade (375.11s)

                                                
                                    
TestMissingContainerUpgrade (466.39s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.9.1.2745068710.exe start -p missing-upgrade-20220602191159-12108 --memory=2200 --driver=docker

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.9.1.2745068710.exe start -p missing-upgrade-20220602191159-12108 --memory=2200 --driver=docker: (4m15.61721s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220602191159-12108

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220602191159-12108: (12.8521708s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220602191159-12108

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:330: (dbg) Done: docker rm missing-upgrade-20220602191159-12108: (1.1851047s)
version_upgrade_test.go:336: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-20220602191159-12108 --memory=2200 --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-20220602191159-12108 --memory=2200 --alsologtostderr -v=1 --driver=docker: (2m44.2150088s)
helpers_test.go:175: Cleaning up "missing-upgrade-20220602191159-12108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-20220602191159-12108

                                                
                                                
=== CONT  TestMissingContainerUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-20220602191159-12108: (31.421472s)
--- PASS: TestMissingContainerUpgrade (466.39s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.63s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.63s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.57s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220602190816-12108 --no-kubernetes --kubernetes-version=1.20 --driver=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220602190816-12108 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (568.0745ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-20220602190816-12108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14269
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.57s)
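
Note: exit status 14 (MK_USAGE) is the expected result here — `--no-kubernetes` and `--kubernetes-version` are mutually exclusive, and the error text points at the fix when the version comes from global config. Sketch (placeholder profile):

    minikube start -p <profile> --no-kubernetes --kubernetes-version=1.20    # rejected, exit 14
    minikube config unset kubernetes-version     # clear a globally configured version
    minikube start -p <profile> --no-kubernetes --driver=docker              # valid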

                                                
                                    
TestPause/serial/Start (195.15s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20220602190816-12108 --memory=2048 --install-addons=false --wait=all --driver=docker

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-20220602190816-12108 --memory=2048 --install-addons=false --wait=all --driver=docker: (3m15.149404s)
--- PASS: TestPause/serial/Start (195.15s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (188.24s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220602190816-12108 --driver=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-20220602190816-12108 --driver=docker: (2m59.4934849s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-20220602190816-12108 status -o json
no_kubernetes_test.go:200: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-20220602190816-12108 status -o json: (8.7501169s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (188.24s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (413.14s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.9.0.2305731917.exe start -p stopped-upgrade-20220602190816-12108 --memory=2200 --vm-driver=docker
E0602 19:09:06.671439   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.9.0.2305731917.exe start -p stopped-upgrade-20220602190816-12108 --memory=2200 --vm-driver=docker: (4m54.1886793s)
version_upgrade_test.go:199: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.9.0.2305731917.exe -p stopped-upgrade-20220602190816-12108 stop

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:199: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.9.0.2305731917.exe -p stopped-upgrade-20220602190816-12108 stop: (22.3864793s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-20220602190816-12108 --memory=2200 --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-20220602190816-12108 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m36.5623119s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (413.14s)
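
Note: the upgrade scenario drives two binaries: the archived v1.9.0 release creates the cluster and stops it, then the binary under test starts the stopped cluster in place. The old release still takes `--vm-driver`, while the current binary uses `--driver` — both spellings are visible in the log above. Sketch (binary path and profile are placeholders):

    <old-minikube>.exe start -p <profile> --memory=2200 --vm-driver=docker
    <old-minikube>.exe -p <profile> stop
    minikube start -p <profile> --memory=2200 --driver=docker    # new binary resumes the old cluster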

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (74.74s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220602190816-12108 --no-kubernetes --driver=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-20220602190816-12108 --no-kubernetes --driver=docker: (42.13946s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-20220602190816-12108 status -o json

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-20220602190816-12108 status -o json: exit status 2 (7.8838186s)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-20220602190816-12108","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-20220602190816-12108

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:124: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-20220602190816-12108: (24.7141614s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (74.74s)
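
Note: restarting an existing profile with `--no-kubernetes` keeps the container but stops the Kubernetes components, so plain `status` exits non-zero by design (exit 2 here: Host Running, Kubelet/APIServer Stopped). Sketch:

    minikube start -p <profile> --no-kubernetes --driver=docker
    minikube -p <profile> status -o json    # exit 2; {"Host":"Running","Kubelet":"Stopped",...}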

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (42.54s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20220602190816-12108 --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-20220602190816-12108 --alsologtostderr -v=1 --driver=docker: (42.5232225s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (42.54s)

                                                
                                    
TestPause/serial/Pause (7.10s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-20220602190816-12108 --alsologtostderr -v=5

                                                
                                                
=== CONT  TestPause/serial/Pause
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-20220602190816-12108 --alsologtostderr -v=5: (7.0963705s)
--- PASS: TestPause/serial/Pause (7.10s)

                                                
                                    
TestPause/serial/VerifyStatus (7.76s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-20220602190816-12108 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-20220602190816-12108 --output=json --layout=cluster: exit status 2 (7.7579187s)

                                                
                                                
-- stdout --
	{"Name":"pause-20220602190816-12108","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220602190816-12108","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (7.76s)

                                                
                                    
TestPause/serial/Unpause (7.06s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-20220602190816-12108 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-20220602190816-12108 --alsologtostderr -v=5: (7.0630145s)
--- PASS: TestPause/serial/Unpause (7.06s)

                                                
                                    
TestPause/serial/PauseAgain (7.86s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-20220602190816-12108 --alsologtostderr -v=5

                                                
                                                
=== CONT  TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-20220602190816-12108 --alsologtostderr -v=5: (7.8613766s)
--- PASS: TestPause/serial/PauseAgain (7.86s)
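
Note: the lifecycle asserted by this group is pause, verify, unpause, pause again. While paused, `status --layout=cluster` exits 2 and reports StatusCode 418 ("Paused") for the apiserver, as shown in VerifyStatus above. Sketch:

    minikube pause -p <profile>
    minikube status -p <profile> --output=json --layout=cluster   # exit 2; apiserver 418/Paused
    minikube unpause -p <profile>
    minikube pause -p <profile>    # pausing a second time still succeeds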

                                                
                                    
TestPause/serial/DeletePaused (27.96s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-20220602190816-12108 --alsologtostderr -v=5

                                                
                                                
=== CONT  TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-20220602190816-12108 --alsologtostderr -v=5: (27.9648461s)
--- PASS: TestPause/serial/DeletePaused (27.96s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (20.15s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json

                                                
                                                
=== CONT  TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (16.1092741s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:168: (dbg) Done: docker ps -a: (1.3910398s)
pause_test.go:173: (dbg) Run:  docker volume inspect pause-20220602190816-12108
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-20220602190816-12108: exit status 1 (1.2694022s)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220602190816-12108

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
pause_test.go:178: (dbg) Done: docker network ls: (1.3436914s)
--- PASS: TestPause/serial/VerifyDeletedResources (20.15s)
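
Note: teardown verification covers the three Docker resources the driver creates — container, named volume, and network. After `minikube delete`, `docker volume inspect <profile>` is expected to fail with "No such volume" (exit 1), which the test treats as success:

    docker ps -a                      # profile container gone
    docker volume inspect <profile>   # exit 1: Error: No such volume
    docker network ls                 # profile network removed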

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (10.95s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-20220602190816-12108
version_upgrade_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-20220602190816-12108: (10.9521723s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (10.95s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (212.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20220602192231-12108 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-20220602192231-12108 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: (3m32.5600707s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (212.56s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (192.17s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20220602192234-12108 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-20220602192234-12108 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6: (3m12.1688941s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (192.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (152.40s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20220602192235-12108 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6
E0602 19:22:41.003398   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-20220602192235-12108 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6: (2m32.3988591s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (152.40s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/FirstStart (145.54s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220602192441-12108 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220602192441-12108 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6: (2m25.5357354s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (145.54s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.24s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220602192235-12108 create -f testdata\busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [e1e1541b-15e0-4ec3-b584-e0b7759fce43] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [e1e1541b-15e0-4ec3-b584-e0b7759fce43] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.0336088s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220602192235-12108 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.24s)
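
Note: the DeployApp step is the same in each StartStop group: create the busybox pod from testdata, wait for the `integration-test=busybox` label to report Running, then run a trivial exec to prove the apiserver-to-kubelet path works. Sketch (placeholder context name):

    kubectl --context <profile> create -f testdata\busybox.yaml
    kubectl --context <profile> exec busybox -- /bin/sh -c "ulimit -n"   # once the pod is Running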

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (12.51s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20220602192235-12108 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20220602192235-12108 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (7.5824284s)
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context embed-certs-20220602192235-12108 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:217: (dbg) Done: kubectl --context embed-certs-20220602192235-12108 describe deploy/metrics-server -n kube-system: (4.9231922s)
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (12.51s)
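
Note: the addon checks deliberately point metrics-server at an unreachable registry (`fake.domain`) with an echoserver image, so only the addon wiring — not a working metrics-server — is being validated; `kubectl describe` then confirms the overridden image/registry landed in the Deployment. Sketch:

    minikube addons enable metrics-server -p <profile> --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context <profile> describe deploy/metrics-server -n kube-system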

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (19.59s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-20220602192235-12108 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:230: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-20220602192235-12108 --alsologtostderr -v=3: (19.5850618s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (19.59s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (12.14s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220602192234-12108 create -f testdata\busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [299d0ad6-04de-49c0-bef5-d0fdd5f22cd9] Pending
helpers_test.go:342: "busybox" [299d0ad6-04de-49c0-bef5-d0fdd5f22cd9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:342: "busybox" [299d0ad6-04de-49c0-bef5-d0fdd5f22cd9] Running

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.0708843s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220602192234-12108 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (6.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220602192235-12108 -n embed-certs-20220602192235-12108

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220602192235-12108 -n embed-certs-20220602192235-12108: exit status 7 (3.0921306s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20220602192235-12108 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20220602192235-12108 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.0238632s)
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (6.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (412.76s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20220602192235-12108 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-20220602192235-12108 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6: (6m43.7017696s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220602192235-12108 -n embed-certs-20220602192235-12108
E0602 19:32:41.006321   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220602192235-12108 -n embed-certs-20220602192235-12108: (9.0566816s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (412.76s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (6.37s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20220602192234-12108 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20220602192234-12108 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (5.8839542s)
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context no-preload-20220602192234-12108 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (6.37s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (12.30s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220602192231-12108 create -f testdata\busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [c7a64a81-a965-4057-b889-68fc0eeea6c7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
helpers_test.go:342: "busybox" [c7a64a81-a965-4057-b889-68fc0eeea6c7] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.0411121s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220602192231-12108 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (12.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (19.75s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-20220602192234-12108 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:230: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-20220602192234-12108 --alsologtostderr -v=3: (19.7528037s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (19.75s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (6.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20220602192231-12108 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20220602192231-12108 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (6.0199858s)
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context old-k8s-version-20220602192231-12108 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (6.51s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (19.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-20220602192231-12108 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:230: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-20220602192231-12108 --alsologtostderr -v=3: (19.4496287s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (19.45s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (6.94s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220602192234-12108 -n no-preload-20220602192234-12108
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220602192234-12108 -n no-preload-20220602192234-12108: exit status 7 (3.2743257s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20220602192234-12108 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20220602192234-12108 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.6689441s)
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (6.94s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (454.10s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20220602192234-12108 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-20220602192234-12108 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6: (7m23.9211727s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220602192234-12108 -n no-preload-20220602192234-12108

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220602192234-12108 -n no-preload-20220602192234-12108: (10.1766934s)
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (454.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (7.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220602192231-12108 -n old-k8s-version-20220602192231-12108
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220602192231-12108 -n old-k8s-version-20220602192231-12108: exit status 7 (3.537525s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20220602192231-12108 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20220602192231-12108 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.6162155s)
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (7.15s)

TestStartStop/group/old-k8s-version/serial/SecondStart (477.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20220602192231-12108 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0
E0602 19:26:57.285006   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-20220602192231-12108 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: (7m48.1913216s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220602192231-12108 -n old-k8s-version-20220602192231-12108

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220602192231-12108 -n old-k8s-version-20220602192231-12108: (9.5399092s)
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (477.73s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (11.32s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220602192441-12108 create -f testdata\busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [1d8f8b86-96f3-489b-911d-3be0a4204948] Pending
helpers_test.go:342: "busybox" [1d8f8b86-96f3-489b-911d-3be0a4204948] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [1d8f8b86-96f3-489b-911d-3be0a4204948] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 10.1936064s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220602192441-12108 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (11.32s)
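
Note: the deploy check is three steps: create the pod, wait for readiness, run a probe inside it. A sketch of the same flow; `kubectl wait` here is a stand-in for the test's own polling loop (the test allows up to 8m0s):

    kubectl --context default-k8s-different-port-20220602192441-12108 create -f testdata\busybox.yaml
    kubectl --context default-k8s-different-port-20220602192441-12108 wait --for=condition=ready pod busybox --timeout=8m
    # Sanity-check the container environment (open-file limit).
    kubectl --context default-k8s-different-port-20220602192441-12108 exec busybox -- /bin/sh -c "ulimit -n"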

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (6.44s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20220602192441-12108 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20220602192441-12108 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (6.0577106s)
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context default-k8s-different-port-20220602192441-12108 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (6.44s)
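
Note: the addon is deliberately pointed at an unreachable registry (`--registries=MetricsServer=fake.domain`), so what gets verified is the rendered deployment spec, not a successful image pull. A sketch of the same check, commands copied from this run:

    out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20220602192441-12108 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    # The overridden image/registry should appear in the deployment description.
    kubectl --context default-k8s-different-port-20220602192441-12108 describe deploy/metrics-server -n kube-system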

TestStartStop/group/default-k8s-different-port/serial/Stop (19.02s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20220602192441-12108 --alsologtostderr -v=3
E0602 19:27:41.004683   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220602180932-12108\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:230: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20220602192441-12108 --alsologtostderr -v=3: (19.0181699s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (19.02s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (6.41s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220602192441-12108 -n default-k8s-different-port-20220602192441-12108
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220602192441-12108 -n default-k8s-different-port-20220602192441-12108: exit status 7 (3.2158914s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20220602192441-12108 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20220602192441-12108 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.1958475s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (6.41s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (419.15s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220602192441-12108 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6
E0602 19:29:06.675719   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.
E0602 19:31:57.282432   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220602192441-12108 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6: (6m49.2424182s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220602192441-12108 -n default-k8s-different-port-20220602192441-12108

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220602192441-12108 -n default-k8s-different-port-20220602192441-12108: (9.9026117s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (419.15s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (39.1s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-cd7c84bfc-cnfb9" [66e36106-a03d-4573-8caa-60ac9b86ad8b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-cd7c84bfc-cnfb9" [66e36106-a03d-4573-8caa-60ac9b86ad8b] Running
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 39.0970524s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (39.10s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.54s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-cd7c84bfc-cnfb9" [66e36106-a03d-4573-8caa-60ac9b86ad8b] Running
start_stop_delete_test.go:289: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0344365s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context embed-certs-20220602192235-12108 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.54s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (7.89s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p embed-certs-20220602192235-12108 "sudo crictl images -o json"
start_stop_delete_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe ssh -p embed-certs-20220602192235-12108 "sudo crictl images -o json": (7.887125s)
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (7.89s)

TestStartStop/group/embed-certs/serial/Pause (56.5s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-20220602192235-12108 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-20220602192235-12108 --alsologtostderr -v=1: (9.7930154s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220602192235-12108 -n embed-certs-20220602192235-12108

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220602192235-12108 -n embed-certs-20220602192235-12108: exit status 2 (9.047522s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20220602192235-12108 -n embed-certs-20220602192235-12108

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20220602192235-12108 -n embed-certs-20220602192235-12108: exit status 2 (8.7883719s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-20220602192235-12108 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-20220602192235-12108 --alsologtostderr -v=1: (10.0720118s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220602192235-12108 -n embed-certs-20220602192235-12108
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220602192235-12108 -n embed-certs-20220602192235-12108: (10.1456251s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20220602192235-12108 -n embed-certs-20220602192235-12108

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20220602192235-12108 -n embed-certs-20220602192235-12108: (8.6506786s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (56.50s)
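
Note: the pause round-trip asserts a specific status matrix, visible in the exit codes above: while paused, {{.APIServer}} prints "Paused" and {{.Kubelet}} prints "Stopped", each with the expected exit status 2; after unpause, both queries succeed. A sketch of the same cycle against this profile:

    out/minikube-windows-amd64.exe pause -p embed-certs-20220602192235-12108
    # Expect "Paused" and exit status 2.
    out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220602192235-12108
    # Expect "Stopped" and exit status 2.
    out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20220602192235-12108
    out/minikube-windows-amd64.exe unpause -p embed-certs-20220602192235-12108
    # Both status queries should now exit 0.
    out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220602192235-12108
    out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-20220602192235-12108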

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (48.11s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-cd7c84bfc-vbpv7" [cc11e1c2-9798-4905-8f00-d9d2ff60c3a7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0602 19:34:06.677546   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-cd7c84bfc-vbpv7" [cc11e1c2-9798-4905-8f00-d9d2ff60c3a7] Running

=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 48.0865494s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (48.11s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-958c5c65f-5ljh7" [80ec5e62-1d6d-4019-9849-5e91302c8c2a] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0480604s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.05s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (51.1s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-cd7c84bfc-24cn2" [14d1f438-3ba1-4f21-bc86-3f30ce2b0626] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-cd7c84bfc-24cn2" [14d1f438-3ba1-4f21-bc86-3f30ce2b0626] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 51.0995484s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (51.10s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (11.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-958c5c65f-5ljh7" [80ec5e62-1d6d-4019-9849-5e91302c8c2a] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0367743s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context old-k8s-version-20220602192231-12108 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:293: (dbg) Done: kubectl --context old-k8s-version-20220602192231-12108 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (6.4933958s)
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (11.55s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (9.52s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-cd7c84bfc-vbpv7" [cc11e1c2-9798-4905-8f00-d9d2ff60c3a7] Running

=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.9733234s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context no-preload-20220602192234-12108 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (9.52s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (8.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p old-k8s-version-20220602192231-12108 "sudo crictl images -o json"

=== CONT  TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe ssh -p old-k8s-version-20220602192231-12108 "sudo crictl images -o json": (8.5959443s)
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (8.60s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (8.68s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p no-preload-20220602192234-12108 "sudo crictl images -o json"

=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe ssh -p no-preload-20220602192234-12108 "sudo crictl images -o json": (8.6752015s)
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (8.68s)

TestStartStop/group/old-k8s-version/serial/Pause (45.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-20220602192231-12108 --alsologtostderr -v=1

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-20220602192231-12108 --alsologtostderr -v=1: (7.7681014s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20220602192231-12108 -n old-k8s-version-20220602192231-12108

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20220602192231-12108 -n old-k8s-version-20220602192231-12108: exit status 2 (7.59034s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-20220602192231-12108 -n old-k8s-version-20220602192231-12108

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-20220602192231-12108 -n old-k8s-version-20220602192231-12108: exit status 2 (7.4992391s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-20220602192231-12108 --alsologtostderr -v=1

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-20220602192231-12108 --alsologtostderr -v=1: (6.8663557s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20220602192231-12108 -n old-k8s-version-20220602192231-12108

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20220602192231-12108 -n old-k8s-version-20220602192231-12108: (8.1803768s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-20220602192231-12108 -n old-k8s-version-20220602192231-12108

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-20220602192231-12108 -n old-k8s-version-20220602192231-12108: (7.7936265s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (45.70s)

TestStartStop/group/no-preload/serial/Pause (45.69s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-20220602192234-12108 --alsologtostderr -v=1

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-20220602192234-12108 --alsologtostderr -v=1: (7.5335371s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20220602192234-12108 -n no-preload-20220602192234-12108
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20220602192234-12108 -n no-preload-20220602192234-12108: exit status 2 (7.3095933s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20220602192234-12108 -n no-preload-20220602192234-12108

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20220602192234-12108 -n no-preload-20220602192234-12108: exit status 2 (7.5716178s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-20220602192234-12108 --alsologtostderr -v=1

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-20220602192234-12108 --alsologtostderr -v=1: (7.1232278s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20220602192234-12108 -n no-preload-20220602192234-12108

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20220602192234-12108 -n no-preload-20220602192234-12108: (8.307103s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20220602192234-12108 -n no-preload-20220602192234-12108

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20220602192234-12108 -n no-preload-20220602192234-12108: (7.8486584s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (45.69s)

TestStartStop/group/newest-cni/serial/FirstStart (519.82s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20220602193528-12108 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-20220602193528-12108 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6: (8m39.8241874s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (519.82s)
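
Note: the newest-cni profile carries its whole network configuration on the start command. The command below is copied from the run above; the comments are interpretation, not test output:

    # --wait=apiserver,system_pods,default_sa limits the readiness wait to core
    #   components, since workload pods cannot schedule until CNI is set up.
    # --extra-config=... values are passed through to kubelet and kubeadm.
    out/minikube-windows-amd64.exe start -p newest-cni-20220602193528-12108 --memory=2200 --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6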

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.57s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-cd7c84bfc-24cn2" [14d1f438-3ba1-4f21-bc86-3f30ce2b0626] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.039914s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context default-k8s-different-port-20220602192441-12108 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.57s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (7.74s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20220602192441-12108 "sudo crictl images -o json"

=== CONT  TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20220602192441-12108 "sudo crictl images -o json": (7.7359293s)
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (7.74s)

TestStartStop/group/default-k8s-different-port/serial/Pause (50.95s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20220602192441-12108 --alsologtostderr -v=1

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20220602192441-12108 --alsologtostderr -v=1: (7.4308294s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20220602192441-12108 -n default-k8s-different-port-20220602192441-12108
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20220602192441-12108 -n default-k8s-different-port-20220602192441-12108: exit status 2 (7.4874221s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20220602192441-12108 -n default-k8s-different-port-20220602192441-12108
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20220602192441-12108 -n default-k8s-different-port-20220602192441-12108: exit status 2 (6.9037321s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-different-port-20220602192441-12108 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-different-port-20220602192441-12108 --alsologtostderr -v=1: (14.6625128s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20220602192441-12108 -n default-k8s-different-port-20220602192441-12108
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20220602192441-12108 -n default-k8s-different-port-20220602192441-12108: (7.390768s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20220602192441-12108 -n default-k8s-different-port-20220602192441-12108

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20220602192441-12108 -n default-k8s-different-port-20220602192441-12108: (7.0730175s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (50.95s)

TestNetworkPlugins/group/auto/Start (763.55s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-20220602191545-12108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-20220602191545-12108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker: (12m43.5523332s)
--- PASS: TestNetworkPlugins/group/auto/Start (763.55s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (6.51s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20220602193528-12108 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20220602193528-12108 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (6.5130332s)
start_stop_delete_test.go:213: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (6.51s)

TestStartStop/group/newest-cni/serial/Stop (19.58s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-20220602193528-12108 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-20220602193528-12108 --alsologtostderr -v=3: (19.5818939s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (19.58s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (6.46s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220602193528-12108 -n newest-cni-20220602193528-12108
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220602193528-12108 -n newest-cni-20220602193528-12108: exit status 7 (3.2420012s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20220602193528-12108 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20220602193528-12108 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.2186326s)
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (6.46s)

TestStartStop/group/newest-cni/serial/SecondStart (84.45s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20220602193528-12108 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6
E0602 19:44:51.960968   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-different-port-20220602192441-12108\client.crt: The system cannot find the path specified.
E0602 19:45:47.522652   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220602192234-12108\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:258: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-20220602193528-12108 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6: (1m15.7449444s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220602193528-12108 -n newest-cni-20220602193528-12108
E0602 19:46:04.369082   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220602192231-12108\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220602193528-12108 -n newest-cni-20220602193528-12108: (8.7004105s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (84.45s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (8.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p newest-cni-20220602193528-12108 "sudo crictl images -o json"
start_stop_delete_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe ssh -p newest-cni-20220602193528-12108 "sudo crictl images -o json": (8.2411807s)
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (8.24s)
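
Note: the image audit shells into the node and dumps the container runtime's image store as JSON, which the test then scans for non-minikube images. The jq post-processing below is illustrative only (not part of the test, and assumes jq is available on the host):

    out/minikube-windows-amd64.exe ssh -p newest-cni-20220602193528-12108 "sudo crictl images -o json"
    # e.g., list just the image tags:
    out/minikube-windows-amd64.exe ssh -p newest-cni-20220602193528-12108 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'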

TestNetworkPlugins/group/bridge/Start (131.71s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-20220602191545-12108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker
E0602 19:49:06.686353   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220602172845-12108\client.crt: The system cannot find the path specified.

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-20220602191545-12108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker: (2m11.7093812s)
--- PASS: TestNetworkPlugins/group/bridge/Start (131.71s)

TestNetworkPlugins/group/auto/KubeletFlags (7.12s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-20220602191545-12108 "pgrep -a kubelet"
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p auto-20220602191545-12108 "pgrep -a kubelet": (7.1198055s)
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (7.12s)

TestNetworkPlugins/group/auto/NetCatPod (21.04s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220602191545-12108 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-qz5hl" [344658a4-9a7e-4645-9246-7b7c71321a19] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-qz5hl" [344658a4-9a7e-4645-9246-7b7c71321a19] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 20.0359468s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (21.04s)

TestNetworkPlugins/group/bridge/KubeletFlags (7.25s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-20220602191545-12108 "pgrep -a kubelet"

=== CONT  TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p bridge-20220602191545-12108 "pgrep -a kubelet": (7.2531721s)
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (7.25s)

TestNetworkPlugins/group/bridge/NetCatPod (21.9s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220602191545-12108 replace --force -f testdata\netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-tbl5k" [4ffad020-3378-4239-86d3-d72518dae75a] Pending
helpers_test.go:342: "netcat-668db85669-tbl5k" [4ffad020-3378-4239-86d3-d72518dae75a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-tbl5k" [4ffad020-3378-4239-86d3-d72518dae75a] Running

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 21.1298596s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (21.90s)

TestNetworkPlugins/group/bridge/DNS (0.64s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.64s)

TestNetworkPlugins/group/bridge/Localhost (0.49s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-20220602191545-12108 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.49s)

TestNetworkPlugins/group/bridge/HairPin (0.52s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-20220602191545-12108 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.52s)
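
Note: the three bridge checks above probe, in order, cluster DNS, pod-local loopback, and hairpin traffic (the pod reaching itself back through its own "netcat" service). A sketch of the same probes, commands copied from this run:

    kubectl --context bridge-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context bridge-20220602191545-12108 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context bridge-20220602191545-12108 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"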

TestNetworkPlugins/group/enable-default-cni/Start (142.88s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-20220602191545-12108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker
E0602 19:52:08.001474   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-different-port-20220602192441-12108\client.crt: The system cannot find the path specified.

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-20220602191545-12108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker: (2m22.8798747s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (142.88s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (7.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-20220602191545-12108 "pgrep -a kubelet"

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p enable-default-cni-20220602191545-12108 "pgrep -a kubelet": (7.0321238s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (7.03s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (31.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220602191545-12108 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-fp26f" [1b14eb0d-29a0-4f24-aa6d-6b3dbd6ab35c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-fp26f" [1b14eb0d-29a0-4f24-aa6d-6b3dbd6ab35c] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 30.4940343s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (31.59s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220602191545-12108 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.67s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-20220602191545-12108 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.67s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-20220602191545-12108 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.59s)

                                                
                                    

Test skip (25/257)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.6/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.6/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.6/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.6/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Registry (51.11s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:280: registry stabilized in 27.892ms

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:282: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-ckj8x" [ed75f488-bb74-47a0-959f-8993c137f9af] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:282: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0579366s
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-proxy-r25rf" [d8d3e0f1-18b3-48af-adfd-bf92e1abff28] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0943115s
addons_test.go:290: (dbg) Run:  kubectl --context addons-20220602171403-12108 delete po -l run=registry-test --now
addons_test.go:295: (dbg) Run:  kubectl --context addons-20220602171403-12108 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:295: (dbg) Done: kubectl --context addons-20220602171403-12108 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (40.601094s)
addons_test.go:305: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (51.11s)
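
Note: the registry addon itself is verified in-cluster by the wget --spider probe above; only the remainder of the test, which assumes direct host-to-cluster connectivity that the Docker driver on Windows does not provide, is skipped. A hedged Go sketch of roughly what the spider probe amounts to (a body-less request that only checks for a 2xx status); it would only succeed where the service DNS name resolves, i.e. inside the cluster:

    package main

    import (
        "fmt"
        "net/http"
    )

    // spiderCheck approximates `wget --spider`: request the URL without
    // consuming a body and require a 2xx status.
    func spiderCheck(url string) error {
        resp, err := http.Head(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode/100 != 2 {
            return fmt.Errorf("unexpected status: %s", resp.Status)
        }
        return nil
    }

    func main() {
        fmt.Println(spiderCheck("http://registry.kube-system.svc.cluster.local"))
    }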

                                                
                                    
TestAddons/parallel/Ingress (51.87s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:162: (dbg) Run:  kubectl --context addons-20220602171403-12108 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:182: (dbg) Run:  kubectl --context addons-20220602171403-12108 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:182: (dbg) Done: kubectl --context addons-20220602171403-12108 replace --force -f testdata\nginx-ingress-v1.yaml: (5.2349138s)
addons_test.go:195: (dbg) Run:  kubectl --context addons-20220602171403-12108 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:195: (dbg) Done: kubectl --context addons-20220602171403-12108 replace --force -f testdata\nginx-pod-svc.yaml: (1.9919321s)
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [b9c607f8-854f-44ff-bab9-8dca8f5c3b46] Pending
helpers_test.go:342: "nginx" [b9c607f8-854f-44ff-bab9-8dca8f5c3b46] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [b9c607f8-854f-44ff-bab9-8dca8f5c3b46] Running

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 37.2659915s
addons_test.go:212: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220602171403-12108 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:212: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220602171403-12108 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (6.8162768s)
addons_test.go:232: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (51.87s)
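
Note: the curl above relies on name-based virtual hosting: the ingress controller routes on the HTTP Host header, so hitting 127.0.0.1 with 'Host: nginx.example.com' reaches the nginx ingress rule without any DNS setup. A minimal Go sketch of the same probe (in Go the Host header is set via req.Host, not the header map):

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
        if err != nil {
            panic(err)
        }
        // Routing happens on the Host header; Go sends req.Host as that header.
        req.Host = "nginx.example.com"
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, len(body), "bytes")
    }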

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:448: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (300.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20220602172845-12108 --alsologtostderr -v=1]

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:908: output didn't produce a URL
functional_test.go:902: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20220602172845-12108 --alsologtostderr -v=1] ...
helpers_test.go:500: unable to terminate pid 6620: Access is denied.
E0602 17:43:20.421140   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 17:46:57.266000   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 17:51:57.269226   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 17:56:57.262992   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 18:00:00.431043   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 18:01:57.270314   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
E0602 18:06:57.261098   12108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220602171403-12108\client.crt: The system cannot find the path specified.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.02s)

                                                
                                    
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (33.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220602172845-12108 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1564: (dbg) Run:  kubectl --context functional-20220602172845-12108 expose deployment hello-node-connect --type=NodePort --port=8080

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-74cf8bc446-qjhfg" [7fa9b05a-7335-4fbc-9fdb-5aac635d4c93] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-74cf8bc446-qjhfg" [7fa9b05a-7335-4fbc-9fdb-5aac635d4c93] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 32.1923042s
functional_test.go:1575: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (33.56s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:193: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (47.63s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:162: (dbg) Run:  kubectl --context ingress-addon-legacy-20220602180932-12108 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:162: (dbg) Done: kubectl --context ingress-addon-legacy-20220602180932-12108 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.968994s)
addons_test.go:182: (dbg) Run:  kubectl --context ingress-addon-legacy-20220602180932-12108 replace --force -f testdata\nginx-ingress-v1beta1.yaml
addons_test.go:182: (dbg) Done: kubectl --context ingress-addon-legacy-20220602180932-12108 replace --force -f testdata\nginx-ingress-v1beta1.yaml: (1.1257598s)
addons_test.go:195: (dbg) Run:  kubectl --context ingress-addon-legacy-20220602180932-12108 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:195: (dbg) Done: kubectl --context ingress-addon-legacy-20220602180932-12108 replace --force -f testdata\nginx-pod-svc.yaml: (1.0090432s)
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [ad7eab94-15c5-4e55-b87d-7f0190cb9558] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [ad7eab94-15c5-4e55-b87d-7f0190cb9558] Running
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 27.0685145s
addons_test.go:212: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220602180932-12108 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:212: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220602180932-12108 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (6.3340839s)
addons_test.go:232: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestIngressAddonLegacy/serial/ValidateIngressAddons (47.63s)

                                                
                                    
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (17.09s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:105: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220602192424-12108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20220602192424-12108
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20220602192424-12108: (17.092076s)
--- SKIP: TestStartStop/group/disable-driver-mounts (17.09s)

                                                
                                    
TestNetworkPlugins/group/flannel (14.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220602191545-12108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p flannel-20220602191545-12108
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p flannel-20220602191545-12108: (14.3036772s)
--- SKIP: TestNetworkPlugins/group/flannel (14.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel (15.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220602191600-12108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-flannel-20220602191600-12108

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-flannel
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-flannel-20220602191600-12108: (15.8682665s)
--- SKIP: TestNetworkPlugins/group/custom-flannel (15.87s)

                                                
                                    