Test Report: Docker_Windows 14079

bc7278193255a66f30064dc56185dbbc87656da8:2022-05-31:24200

Failed tests: 14/254

TestFunctional/parallel/ServiceCmd (2067.2s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220531173104-2108 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1438: (dbg) Run:  kubectl --context functional-20220531173104-2108 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1438: (dbg) Done: kubectl --context functional-20220531173104-2108 expose deployment hello-node --type=NodePort --port=8080: (1.7943778s)
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54fbb85-c6cbz" [cb26a5f8-1d38-4f26-a335-6c49185e047a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54fbb85-c6cbz" [cb26a5f8-1d38-4f26-a335-6c49185e047a] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 12.085074s
functional_test.go:1448: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1448: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 service list: (6.8622051s)
functional_test.go:1462: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 service --namespace=default --https --url hello-node
functional_test.go:1391: Failed to sent interrupt to proc not supported by windows

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1462: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 service --namespace=default --https --url hello-node: exit status 1 (33m42.3573256s)

-- stdout --
	https://127.0.0.1:51551

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1464: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-20220531173104-2108 service --namespace=default --https --url hello-node" : exit status 1
functional_test.go:1401: service test failed - dumping debug information
functional_test.go:1402: -----------------------service failure post-mortem--------------------------------
functional_test.go:1405: (dbg) Run:  kubectl --context functional-20220531173104-2108 describe po hello-node
functional_test.go:1409: hello-node pod describe:
Name:         hello-node-54fbb85-c6cbz
Namespace:    default
Priority:     0
Node:         functional-20220531173104-2108/192.168.49.2
Start Time:   Tue, 31 May 2022 17:37:02 +0000
Labels:       app=hello-node
              pod-template-hash=54fbb85
Annotations:  <none>
Status:       Running
IP:           172.17.0.6
IPs:
  IP:  172.17.0.6
Controlled By:  ReplicaSet/hello-node-54fbb85
Containers:
  echoserver:
    Container ID:   docker://6adb5fb1de30080d3ea7a23717d2cf5cedc66e5a49250f40c68daf17e777c836
    Image:          k8s.gcr.io/echoserver:1.8
    Image ID:       docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 31 May 2022 17:37:06 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wfj6f (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-wfj6f:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute for 300s
                             node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From                                     Message
  ----    ------     ----       ----                                     -------
  Normal  Scheduled  <unknown>                                           Successfully assigned default/hello-node-54fbb85-c6cbz to functional-20220531173104-2108
  Normal  Pulling    33m        kubelet, functional-20220531173104-2108  Pulling image "k8s.gcr.io/echoserver:1.8"
  Normal  Pulled     33m        kubelet, functional-20220531173104-2108  Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 517.5097ms
  Normal  Created    33m        kubelet, functional-20220531173104-2108  Created container echoserver
  Normal  Started    33m        kubelet, functional-20220531173104-2108  Started container echoserver

Name:         hello-node-connect-74cf8bc446-5wbb6
Namespace:    default
Priority:     0
Node:         functional-20220531173104-2108/192.168.49.2
Start Time:   Tue, 31 May 2022 17:36:37 +0000
Labels:       app=hello-node-connect
              pod-template-hash=74cf8bc446
Annotations:  <none>
Status:       Running
IP:           172.17.0.5
IPs:
  IP:  172.17.0.5
Controlled By:  ReplicaSet/hello-node-connect-74cf8bc446
Containers:
  echoserver:
    Container ID:   docker://1182335c773e478c9890bdd83b406b436f7adcf53e8f4d349c0688be8bb65ef9
    Image:          k8s.gcr.io/echoserver:1.8
    Image ID:       docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 31 May 2022 17:37:06 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tf44x (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-tf44x:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute for 300s
                             node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From                                     Message
  ----    ------     ----       ----                                     -------
  Normal  Scheduled  <unknown>                                           Successfully assigned default/hello-node-connect-74cf8bc446-5wbb6 to functional-20220531173104-2108
  Normal  Pulling    34m        kubelet, functional-20220531173104-2108  Pulling image "k8s.gcr.io/echoserver:1.8"
  Normal  Pulled     33m        kubelet, functional-20220531173104-2108  Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 23.0353974s
  Normal  Created    33m        kubelet, functional-20220531173104-2108  Created container echoserver
  Normal  Started    33m        kubelet, functional-20220531173104-2108  Started container echoserver

functional_test.go:1411: (dbg) Run:  kubectl --context functional-20220531173104-2108 logs -l app=hello-node
functional_test.go:1415: hello-node logs:
functional_test.go:1417: (dbg) Run:  kubectl --context functional-20220531173104-2108 describe svc hello-node
functional_test.go:1421: hello-node svc describe:
Name:                     hello-node
Namespace:                default
Labels:                   app=hello-node
Annotations:              <none>
Selector:                 app=hello-node
Type:                     NodePort
IP:                       10.99.219.246
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30045/TCP
Endpoints:                172.17.0.6:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220531173104-2108
helpers_test.go:231: (dbg) Done: docker inspect functional-20220531173104-2108: (1.035364s)
helpers_test.go:235: (dbg) docker inspect functional-20220531173104-2108:

-- stdout --
	[
	    {
	        "Id": "ac9800570c9ffae9b635aac2624d98b3eb69b6a125527175288721176f4e2ea2",
	        "Created": "2022-05-31T17:31:57.5400252Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 20855,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T17:31:58.5923319Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/ac9800570c9ffae9b635aac2624d98b3eb69b6a125527175288721176f4e2ea2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ac9800570c9ffae9b635aac2624d98b3eb69b6a125527175288721176f4e2ea2/hostname",
	        "HostsPath": "/var/lib/docker/containers/ac9800570c9ffae9b635aac2624d98b3eb69b6a125527175288721176f4e2ea2/hosts",
	        "LogPath": "/var/lib/docker/containers/ac9800570c9ffae9b635aac2624d98b3eb69b6a125527175288721176f4e2ea2/ac9800570c9ffae9b635aac2624d98b3eb69b6a125527175288721176f4e2ea2-json.log",
	        "Name": "/functional-20220531173104-2108",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-20220531173104-2108:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20220531173104-2108",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0baaca0513be4bf258a1f50e3173d958d093976d0a3a9664ac04d87ab09e3671-init/diff:/var/lib/docker/overlay2/42ebd8012a176a6c9bc83a2b81ffb1eb5c8e01d5410cb5d59346522bbaddf2cc/diff:/var/lib/docker/overlay2/59dce173ea661e9679f479af711a101ab0e97afb60abfd3c5b7a199b5c3e2b3b/diff:/var/lib/docker/overlay2/0328b60a223ca9f8bab93e6b86106d8b64d16fa559a56e88abbdee372b3b6a70/diff:/var/lib/docker/overlay2/b781f2620a052ee02138337819bde18c09122be2f20b7cfefaf7688f18d0c559/diff:/var/lib/docker/overlay2/af966c145b90b1748180b9ffcb1521d6fa9914e1d0ca582b239123591ffd1527/diff:/var/lib/docker/overlay2/5cd2b511f6f3bc93855ed77b5510ca4c67426eea433ccda53ea8e864342a413e/diff:/var/lib/docker/overlay2/f896d291d0c004470c3e38ea0d3be8e2b2a48ea36d45662c40fe3e105cbf4dec/diff:/var/lib/docker/overlay2/9e8994dcf5b1692245d5e40982d040298bfa7f7977892cf4be8ba3697f2c1283/diff:/var/lib/docker/overlay2/a7da4130c1b629e2a737b34701c6d4dfe6c48f92771856a887e06a1edc5456f8/diff:/var/lib/docker/overlay2/4c25734b9c8459489256b5f70dbb446897b9510d1cf9187e903f845ffa2a7ec2/diff:/var/lib/docker/overlay2/5c6cef49a0d0d1a36777fa7e0955ecdffb41ce354b7984f232e9cd51916416f7/diff:/var/lib/docker/overlay2/b79c799ed97edb702ed4c4ccb55ef9c645ae162e30e8f297ca5dd1152c29de41/diff:/var/lib/docker/overlay2/c84b7bc7c79ffdedf2d1265e21eec011dc3215811fb0569f7eb7d6b9aec884e8/diff:/var/lib/docker/overlay2/df8e2c3af362fd04ee17cb8d67105cf489427b2ae7cec77b79a2778e6c8c0234/diff:/var/lib/docker/overlay2/e56e356f8425868b31ada978267de73f074f211985ff1849ece7ab8341c33bae/diff:/var/lib/docker/overlay2/82c032066e83d3297742c83dd29132974e9db73a0b0b0a8edd3bcbbdb29cd53c/diff:/var/lib/docker/overlay2/15532131f3e6d0b2faf705733b06ae0c869147f2ca9592e3a80b6eaadad23544/diff:/var/lib/docker/overlay2/73fa456f504732f46cbe49368167247ca47b3099a6a75a7023ba16e7f598aee5/diff:/var/lib/docker/overlay2/e5635e020aadcc8dd1e5e3cd2eaa45cb97147f47bf406211fc61d7cbfc531193/diff:/var/lib/docker/overlay2/40b76b3249d3f7a8a737e2db80ebc1ed3b76d59724641217e8aae414ad832781/diff:/var/lib/docker/overlay2/50ea2ce78d4fe52f626b2755a14f71a3c4f9b5a4f929646d9200876bdb1652c1/diff:/var/lib/docker/overlay2/d0a6e94d1f4aa73824d39c6e655bc4bdcd6568cea821b5d0f71174591c9cbbb3/diff:/var/lib/docker/overlay2/20c8fbe37a8c89a03b7bffe8cbc507e888cd5886f86f43b551d6a09fee1ce5e7/diff:/var/lib/docker/overlay2/48942b31cfe24e44c65a8be1785cd90488444f8c420a79b72a123034b01dd3f8/diff:/var/lib/docker/overlay2/c90124ab97e02facd949bfbd45815d6d73a40303b47ba4a4bc035788f5ee2dc3/diff:/var/lib/docker/overlay2/38c82aeabee1c8f46551413ecabb24f2f22680bb623f79e40c751558747a03f5/diff:/var/lib/docker/overlay2/4fa8894d1c1d773bc2e0511f273eab03fb7b8be7489eab5cd3eb57cc0d12e855/diff:/var/lib/docker/overlay2/23319fcddb47e50928e2044bac662de8153728f3a2eefa9c6ad5a5f413efec88/diff:/var/lib/docker/overlay2/b7ecd073b5b747c21ecbd1ca61887899f7e227fac3e383e24f868549b7929d74/diff:/var/lib/docker/overlay2/29a5674b4bbabfd07c4ce0b2a8b84ce98af380bf984043a4a9a6cd0743e4630c/diff:/var/lib/docker/overlay2/86a10266979ed72dc4372ade724e64741de35702626642ba60a15cca1433682e/diff:/var/lib/docker/overlay2/03a1af7f82f1cb2b6eadbd1f13c8e9f6ca281ef3a8968d6aa45d284f286aefca/diff:/var/lib/docker/overlay2/f36cce4566278d24128326f8ef6ea446884c0c6941ccdb763ddf936e178afbff/diff:/var/lib/docker/overlay2/e54a2a61ba3597af53ec65a822821ffca97788e4b1dbfeedf98bf4d12e78973d/diff:/var/lib/docker/overlay2/dd54a25b898b0d7952f0bcb99a0450ee3d6b4269599e9355b4ae5e0c540c2caa/diff:/var/lib/docker/overlay2/ae6c1d1e9e79e03382217f21886420e3118a3f18f7c44f76c19262a84a43e219/diff:/var/lib/docker/overlay2/82faa00f86c1fa99063466464f71cdd6d510aa3e45c6c43301b2119b5bd5285a/diff:/var/lib/docker/overlay2/9f54999972b485642f042b9ed4d00316be0a1d35c060e619aca79b1583180446/diff:/var/lib/docker/overlay2/b467240c20564ba44d0946c716cf18ab5be973b43b02c37ee3ddd8f94502f41b/diff:/var/lib/docker/overlay2/21217d4ff1c5cf81dd53cfd831e0961189fb9f86812e1f53843f0022383345e6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0baaca0513be4bf258a1f50e3173d958d093976d0a3a9664ac04d87ab09e3671/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0baaca0513be4bf258a1f50e3173d958d093976d0a3a9664ac04d87ab09e3671/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0baaca0513be4bf258a1f50e3173d958d093976d0a3a9664ac04d87ab09e3671/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-20220531173104-2108",
	                "Source": "/var/lib/docker/volumes/functional-20220531173104-2108/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20220531173104-2108",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20220531173104-2108",
	                "name.minikube.sigs.k8s.io": "functional-20220531173104-2108",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8f5ec6b3a98d73d85b19c3e80da3877e22e400bd553323f9d3272184e546a351",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51287"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51288"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51289"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51285"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51286"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8f5ec6b3a98d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20220531173104-2108": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ac9800570c9f",
	                        "functional-20220531173104-2108"
	                    ],
	                    "NetworkID": "740a4157baec428672fe430eeb2c354ea9bd01fcc4b0eff5f0aed585a405d7f0",
	                    "EndpointID": "566335ecd16642ab3b131d5b72b708a374c0c8614edea0f8a954ac95c28b18bc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220531173104-2108 -n functional-20220531173104-2108
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220531173104-2108 -n functional-20220531173104-2108: (6.4055546s)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 logs -n 25: (8.3837397s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmd logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|-----------------------------------------------------------------------------------------------------|--------------------------------|-------------------|----------------|---------------------|---------------------|
	|    Command     |                                                Args                                                 |            Profile             |       User        |    Version     |     Start Time      |      End Time       |
	|----------------|-----------------------------------------------------------------------------------------------------|--------------------------------|-------------------|----------------|---------------------|---------------------|
	| image          | functional-20220531173104-2108 image load --daemon                                                  | functional-20220531173104-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 17:38 GMT | 31 May 22 17:38 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-20220531173104-2108                               |                                |                   |                |                     |                     |
	| image          | functional-20220531173104-2108                                                                      | functional-20220531173104-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 17:38 GMT | 31 May 22 17:38 GMT |
	|                | image ls                                                                                            |                                |                   |                |                     |                     |
	| image          | functional-20220531173104-2108 image load --daemon                                                  | functional-20220531173104-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 17:38 GMT | 31 May 22 17:38 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-20220531173104-2108                               |                                |                   |                |                     |                     |
	| image          | functional-20220531173104-2108                                                                      | functional-20220531173104-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 17:38 GMT | 31 May 22 17:38 GMT |
	|                | image ls                                                                                            |                                |                   |                |                     |                     |
	| image          | functional-20220531173104-2108 image load --daemon                                                  | functional-20220531173104-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 17:38 GMT | 31 May 22 17:38 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-20220531173104-2108                               |                                |                   |                |                     |                     |
	| image          | functional-20220531173104-2108                                                                      | functional-20220531173104-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 17:38 GMT | 31 May 22 17:39 GMT |
	|                | image ls                                                                                            |                                |                   |                |                     |                     |
	| image          | functional-20220531173104-2108 image save                                                           | functional-20220531173104-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 17:39 GMT | 31 May 22 17:39 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-20220531173104-2108                               |                                |                   |                |                     |                     |
	|                | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar                              |                                |                   |                |                     |                     |
	| image          | functional-20220531173104-2108 image rm                                                             | functional-20220531173104-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 17:39 GMT | 31 May 22 17:39 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-20220531173104-2108                               |                                |                   |                |                     |                     |
	| image          | functional-20220531173104-2108                                                                      | functional-20220531173104-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 17:39 GMT | 31 May 22 17:39 GMT |
	|                | image ls                                                                                            |                                |                   |                |                     |                     |
	| image          | functional-20220531173104-2108 image load                                                           | functional-20220531173104-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 17:39 GMT | 31 May 22 17:39 GMT |
	|                | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar                              |                                |                   |                |                     |                     |
	| image          | functional-20220531173104-2108                                                                      | functional-20220531173104-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 17:39 GMT | 31 May 22 17:39 GMT |
	|                | image ls                                                                                            |                                |                   |                |                     |                     |
	| image          | functional-20220531173104-2108 image save --daemon                                                  | functional-20220531173104-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 17:39 GMT | 31 May 22 17:39 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-20220531173104-2108                               |                                |                   |                |                     |                     |
	| cp             | functional-20220531173104-2108                                                                      | functional-20220531173104-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 17:39 GMT | 31 May 22 17:40 GMT |
	|                | cp testdata\cp-test.txt                                                                             |                                |                   |                |                     |                     |
	|                | /home/docker/cp-test.txt                                                                            |                                |                   |                |                     |                     |
	| ssh            | functional-20220531173104-2108                                                                      | functional-20220531173104-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 17:40 GMT | 31 May 22 17:40 GMT |
	|                | ssh -n                                                                                              |                                |                   |                |                     |                     |
	|                | functional-20220531173104-2108                                                                      |                                |                   |                |                     |                     |
	|                | sudo cat                                                                                            |                                |                   |                |                     |                     |
	|                | /home/docker/cp-test.txt                                                                            |                                |                   |                |                     |                     |
	| cp             | functional-20220531173104-2108 cp functional-20220531173104-2108:/home/docker/cp-test.txt           | functional-20220531173104-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 17:40 GMT | 31 May 22 17:40 GMT |
	|                | C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalparallelCpCmd2199164473\001\cp-test.txt |                                |                   |                |                     |                     |
	| ssh            | functional-20220531173104-2108                                                                      | functional-20220531173104-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 17:40 GMT | 31 May 22 17:40 GMT |
	|                | ssh -n                                                                                              |                                |                   |                |                     |                     |
	|                | functional-20220531173104-2108                                                                      |                                |                   |                |                     |                     |
	|                | sudo cat                                                                                            |                                |                   |                |                     |                     |
	|                | /home/docker/cp-test.txt                                                                            |                                |                   |                |                     |                     |
	| update-context | functional-20220531173104-2108                                                                      | functional-20220531173104-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 17:40 GMT | 31 May 22 17:40 GMT |
	|                | update-context                                                                                      |                                |                   |                |                     |                     |
	|                | --alsologtostderr -v=2                                                                              |                                |                   |                |                     |                     |
	| update-context | functional-20220531173104-2108                                                                      | functional-20220531173104-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 17:40 GMT | 31 May 22 17:40 GMT |
	|                | update-context                                                                                      |                                |                   |                |                     |                     |
	|                | --alsologtostderr -v=2                                                                              |                                |                   |                |                     |                     |
	| update-context | functional-20220531173104-2108                                                                      | functional-20220531173104-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 17:40 GMT | 31 May 22 17:40 GMT |
	|                | update-context                                                                                      |                                |                   |                |                     |                     |
	|                | --alsologtostderr -v=2                                                                              |                                |                   |                |                     |                     |
	| image          | functional-20220531173104-2108                                                                      | functional-20220531173104-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 17:40 GMT | 31 May 22 17:40 GMT |
	|                | image ls --format yaml                                                                              |                                |                   |                |                     |                     |
	| image          | functional-20220531173104-2108                                                                      | functional-20220531173104-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 17:40 GMT | 31 May 22 17:40 GMT |
	|                | image ls --format table                                                                             |                                |                   |                |                     |                     |
	| image          | functional-20220531173104-2108                                                                      | functional-20220531173104-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 17:40 GMT | 31 May 22 17:40 GMT |
	|                | image ls --format json                                                                              |                                |                   |                |                     |                     |
	| image          | functional-20220531173104-2108                                                                      | functional-20220531173104-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 17:40 GMT | 31 May 22 17:40 GMT |
	|                | image ls --format short                                                                             |                                |                   |                |                     |                     |
	| image          | functional-20220531173104-2108 image build -t                                                       | functional-20220531173104-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 17:40 GMT | 31 May 22 17:40 GMT |
	|                | localhost/my-image:functional-20220531173104-2108                                                   |                                |                   |                |                     |                     |
	|                | testdata\build                                                                                      |                                |                   |                |                     |                     |
	| image          | functional-20220531173104-2108                                                                      | functional-20220531173104-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 17:40 GMT | 31 May 22 17:41 GMT |
	|                | image ls                                                                                            |                                |                   |                |                     |                     |
	|----------------|-----------------------------------------------------------------------------------------------------|--------------------------------|-------------------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 17:37:21
	Running on machine: minikube7
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 17:37:21.384219    6484 out.go:296] Setting OutFile to fd 716 ...
	I0531 17:37:21.440222    6484 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:37:21.440222    6484 out.go:309] Setting ErrFile to fd 644...
	I0531 17:37:21.440222    6484 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:37:21.452218    6484 out.go:303] Setting JSON to false
	I0531 17:37:21.454218    6484 start.go:115] hostinfo: {"hostname":"minikube7","uptime":76911,"bootTime":1653941730,"procs":159,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0531 17:37:21.454218    6484 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 17:37:21.471229    6484 out.go:177] * [functional-20220531173104-2108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0531 17:37:21.480227    6484 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0531 17:37:21.483217    6484 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0531 17:37:21.486235    6484 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 17:37:21.487219    6484 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 17:37:21.487219    6484 config.go:178] Loaded profile config "functional-20220531173104-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 17:37:21.487219    6484 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 17:37:24.141487    6484 docker.go:137] docker version: linux-20.10.14
	I0531 17:37:24.150854    6484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:37:26.278518    6484 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1274722s)
	I0531 17:37:26.279048    6484 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-05-31 17:37:25.201793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:37:26.284116    6484 out.go:177] * Using the docker driver based on existing profile
	I0531 17:37:26.288089    6484 start.go:284] selected driver: docker
	I0531 17:37:26.288089    6484 start.go:806] validating driver "docker" against &{Name:functional-20220531173104-2108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220531173104-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 17:37:26.288376    6484 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 17:37:26.310470    6484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:37:28.402370    6484 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0918897s)
	I0531 17:37:28.402370    6484 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-05-31 17:37:27.3595171 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:37:28.447389    6484 cni.go:95] Creating CNI manager for ""
	I0531 17:37:28.447389    6484 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 17:37:28.447389    6484 start_flags.go:306] config:
	{Name:functional-20220531173104-2108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220531173104-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 17:37:28.453380    6484 out.go:177] * dry-run validation complete!
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-05-31 17:31:59 UTC, end at Tue 2022-05-31 18:11:18 UTC. --
	May 31 17:32:16 functional-20220531173104-2108 dockerd[510]: time="2022-05-31T17:32:16.680751800Z" level=info msg="API listen on [::]:2376"
	May 31 17:32:16 functional-20220531173104-2108 dockerd[510]: time="2022-05-31T17:32:16.689768600Z" level=info msg="API listen on /var/run/docker.sock"
	May 31 17:33:14 functional-20220531173104-2108 dockerd[510]: time="2022-05-31T17:33:14.642799700Z" level=info msg="ignoring event" container=98e3aa1f1296f25e41ca656952022f984ab049f068d93686284c38a494fe3b39 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:33:14 functional-20220531173104-2108 dockerd[510]: time="2022-05-31T17:33:14.808182700Z" level=info msg="ignoring event" container=315f694ed7ff001175914f292f26e0bca8bf0ec3c19f5e4695b56a660e7ab976 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:35:19 functional-20220531173104-2108 dockerd[510]: time="2022-05-31T17:35:19.487805300Z" level=info msg="ignoring event" container=aa7f15a988995a6fa85f719ea833d5b834d26e7aa2b3844d38192e27a15c9ab0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:35:19 functional-20220531173104-2108 dockerd[510]: time="2022-05-31T17:35:19.487884500Z" level=info msg="ignoring event" container=af4786384c492cf1a455f410500e518d7d0e4decde59c579559a2f79cf287bf1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:35:19 functional-20220531173104-2108 dockerd[510]: time="2022-05-31T17:35:19.690452400Z" level=info msg="ignoring event" container=3c1115c1305583de691cab3c9af45e5287db0a3529ea517c35eff4767491efcc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:35:19 functional-20220531173104-2108 dockerd[510]: time="2022-05-31T17:35:19.801212900Z" level=info msg="ignoring event" container=77a3b3590b9aa46cda285fc13f0eb69864570d411f836d6635490284cbb8ffd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:35:19 functional-20220531173104-2108 dockerd[510]: time="2022-05-31T17:35:19.888831400Z" level=info msg="ignoring event" container=bfa84a2ffd228f119df433c7266b611f44398bcf0f75764f7b83026b9c5f073d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:35:19 functional-20220531173104-2108 dockerd[510]: time="2022-05-31T17:35:19.888902300Z" level=info msg="ignoring event" container=d2010d6f19be0c00b1b23922145e44466896613cc54c9dabe18f1743990c7946 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:35:19 functional-20220531173104-2108 dockerd[510]: time="2022-05-31T17:35:19.889046400Z" level=info msg="ignoring event" container=2ddc1a898dfb6bdfa2192a1ad55421e8c06adb5f1ec25dc0c4c4cfac116e88af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:35:19 functional-20220531173104-2108 dockerd[510]: time="2022-05-31T17:35:19.991434500Z" level=info msg="ignoring event" container=909e135e054cc30a8071e17a4ae073838e8cbee380ac575ae832c3eed81d8053 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:35:19 functional-20220531173104-2108 dockerd[510]: time="2022-05-31T17:35:19.991967100Z" level=info msg="ignoring event" container=9aaae8d9608b86439288c99b679a5d80ea4c92244bd1e99fa17b6d9a0825a80a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:35:19 functional-20220531173104-2108 dockerd[510]: time="2022-05-31T17:35:19.992448000Z" level=info msg="ignoring event" container=bbd1baa4cb2a56e436e609efae295f3b7f3554e405c09ef8cd71a2e20875eb05 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:35:20 functional-20220531173104-2108 dockerd[510]: time="2022-05-31T17:35:20.101148100Z" level=info msg="ignoring event" container=ddadbb8af8f59dd7bcbbc6f541cc9add1879600b98260c585bee87b63a3f2176 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:35:21 functional-20220531173104-2108 dockerd[510]: time="2022-05-31T17:35:21.394548300Z" level=info msg="ignoring event" container=f808f562e4c468a2ac3b7b9648e0ed31f46e15deb3cd95cfa099880a3f3edaa9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:35:21 functional-20220531173104-2108 dockerd[510]: time="2022-05-31T17:35:21.587607700Z" level=info msg="ignoring event" container=b10946ca0bcfbc6e53ac9f0da0b7eecc81faf3ff2d465ba20619ae8b54d19491 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:35:24 functional-20220531173104-2108 dockerd[510]: time="2022-05-31T17:35:24.497971500Z" level=info msg="ignoring event" container=796c188ab0cbbbbdb1748c7ca3ec638f0b548aef6e16807e83b55ec33a0828a4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:35:24 functional-20220531173104-2108 dockerd[510]: time="2022-05-31T17:35:24.598114300Z" level=info msg="ignoring event" container=c93090aa25350595a9d466ede333b6ee67fbad25ebd223b99a43e6b2c0d3ca85 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:35:34 functional-20220531173104-2108 dockerd[510]: time="2022-05-31T17:35:34.111579800Z" level=info msg="ignoring event" container=fe506de671ae7fba5dffae696c3cd7d1e1a52f3732f1c738ce0eb93a0bf53f5e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:35:34 functional-20220531173104-2108 dockerd[510]: time="2022-05-31T17:35:34.309193700Z" level=info msg="ignoring event" container=f9286f1960b88854b2e86df970d569d11daa22334f7fef143f0e936f6cd145d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:37:01 functional-20220531173104-2108 dockerd[510]: time="2022-05-31T17:37:01.088956500Z" level=info msg="ignoring event" container=e63bea6377f2a0a0cc3ee701649c463d8c75818970630810f7ea0f4f37715b75 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:37:02 functional-20220531173104-2108 dockerd[510]: time="2022-05-31T17:37:02.806372400Z" level=info msg="ignoring event" container=23b52b71894a503bdc65ed36c95f8a20ef3e2bc39d1b744966749997165ada3f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:40:55 functional-20220531173104-2108 dockerd[510]: time="2022-05-31T17:40:55.794817100Z" level=info msg="ignoring event" container=831ef2e56aa65c62d7c5c41f2db658df0122503be11bdf7e0a17fe28cd5822b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 17:40:56 functional-20220531173104-2108 dockerd[510]: time="2022-05-31T17:40:56.349511900Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                           CREATED             STATE               NAME                      ATTEMPT             POD ID
	0b5ccd9823932       mysql@sha256:7e99b2b8d5bca914ef31059858210f57b009c40375d647f0d4d65ecd01d6b1d5                   31 minutes ago      Running             mysql                     0                   333834c1cb3d8
	3289072c6c844       nginx@sha256:2bcabc23b45489fb0885d69a06ba1d648aeda973fae7bb981bafbb884165e514                   34 minutes ago      Running             myfrontend                0                   00208e1f542a1
	6adb5fb1de300       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969   34 minutes ago      Running             echoserver                0                   7ef50adfd038f
	1182335c773e4       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969   34 minutes ago      Running             echoserver                0                   768d4e79d18b3
	835617a7dc034       nginx@sha256:a74534e76ee1121d418fa7394ca930eb67440deda413848bc67c68138535b989                   35 minutes ago      Running             nginx                     0                   104acaf228231
	0e1ed68a43faa       a4ca41631cc7a                                                                                   35 minutes ago      Running             coredns                   1                   44909b59d356f
	a24e38366a7b4       6e38f40d628db                                                                                   35 minutes ago      Running             storage-provisioner       2                   37cc42c5630cd
	fedfb55afc4b2       8fa62c12256df                                                                                   35 minutes ago      Running             kube-apiserver            0                   d3e5c39f666c8
	adf0898fd4ff8       595f327f224a4                                                                                   35 minutes ago      Running             kube-scheduler            1                   d06ea92e30a2c
	ad11c3777c4ed       4c03754524064                                                                                   35 minutes ago      Running             kube-proxy                1                   d59f78b47d67b
	c93090aa25350       6e38f40d628db                                                                                   35 minutes ago      Exited              storage-provisioner       1                   37cc42c5630cd
	7eb1fdcb70879       25f8c7f3da61c                                                                                   35 minutes ago      Running             etcd                      1                   0d14dc606fc5b
	67bdd1b22c523       df7b72818ad2e                                                                                   35 minutes ago      Running             kube-controller-manager   1                   b60f69cbd7b19
	796c188ab0cbb       a4ca41631cc7a                                                                                   38 minutes ago      Exited              coredns                   0                   bfa84a2ffd228
	909e135e054cc       4c03754524064                                                                                   38 minutes ago      Exited              kube-proxy                0                   aa7f15a988995
	ddadbb8af8f59       df7b72818ad2e                                                                                   38 minutes ago      Exited              kube-controller-manager   0                   d2010d6f19be0
	f808f562e4c46       595f327f224a4                                                                                   38 minutes ago      Exited              kube-scheduler            0                   9aaae8d9608b8
	77a3b3590b9aa       25f8c7f3da61c                                                                                   38 minutes ago      Exited              etcd                      0                   bbd1baa4cb2a5
	
	* 
	* ==> coredns [0e1ed68a43fa] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
	* 
	* ==> coredns [796c188ab0cb] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-20220531173104-2108
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-20220531173104-2108
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=functional-20220531173104-2108
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T17_32_50_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 17:32:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20220531173104-2108
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 18:11:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 18:06:36 +0000   Tue, 31 May 2022 17:32:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 18:06:36 +0000   Tue, 31 May 2022 17:32:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 18:06:36 +0000   Tue, 31 May 2022 17:32:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 May 2022 18:06:36 +0000   Tue, 31 May 2022 17:33:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20220531173104-2108
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                bfc82849fe6e4a6a9236307a23a8b5f1
	  Boot ID:                    99d8680c-6839-4c5e-a5fa-8740ef80d5ef
	  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-54fbb85-c6cbz                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         34m
	  default                     hello-node-connect-74cf8bc446-5wbb6                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         34m
	  default                     mysql-b87c45988-hqk85                                     600m (3%)     700m (4%)   512Mi (0%)       700Mi (1%)     31m
	  default                     nginx-svc                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         35m
	  default                     sp-pod                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         34m
	  kube-system                 coredns-64897985d-9zl57                                   100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     38m
	  kube-system                 etcd-functional-20220531173104-2108                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         38m
	  kube-system                 kube-apiserver-functional-20220531173104-2108             250m (1%)     0 (0%)      0 (0%)           0 (0%)         35m
	  kube-system                 kube-controller-manager-functional-20220531173104-2108    200m (1%)     0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 kube-proxy-sc9l4                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 kube-scheduler-functional-20220531173104-2108             100m (0%)     0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         38m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                1350m (8%)  700m (4%)
	  memory             682Mi (1%)  870Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 35m                kube-proxy  
	  Normal  Starting                 38m                kube-proxy  
	  Normal  NodeHasNoDiskPressure    38m (x7 over 38m)  kubelet     Node functional-20220531173104-2108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38m (x7 over 38m)  kubelet     Node functional-20220531173104-2108 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  38m (x8 over 38m)  kubelet     Node functional-20220531173104-2108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38m                kubelet     Node functional-20220531173104-2108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38m                kubelet     Node functional-20220531173104-2108 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  38m                kubelet     Node functional-20220531173104-2108 status is now: NodeHasSufficientMemory
	  Normal  Starting                 38m                kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  38m                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                38m                kubelet     Node functional-20220531173104-2108 status is now: NodeReady
	  Normal  Starting                 35m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  35m (x2 over 35m)  kubelet     Node functional-20220531173104-2108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35m (x2 over 35m)  kubelet     Node functional-20220531173104-2108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35m (x2 over 35m)  kubelet     Node functional-20220531173104-2108 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  35m                kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [May31 17:46] WSL2: Performing memory compaction.
	[May31 17:47] WSL2: Performing memory compaction.
	[May31 17:48] WSL2: Performing memory compaction.
	[May31 17:49] WSL2: Performing memory compaction.
	[May31 17:50] WSL2: Performing memory compaction.
	[May31 17:51] WSL2: Performing memory compaction.
	[May31 17:52] WSL2: Performing memory compaction.
	[May31 17:53] WSL2: Performing memory compaction.
	[May31 17:54] WSL2: Performing memory compaction.
	[May31 17:55] WSL2: Performing memory compaction.
	[May31 17:56] WSL2: Performing memory compaction.
	[May31 17:57] WSL2: Performing memory compaction.
	[May31 17:58] WSL2: Performing memory compaction.
	[May31 17:59] WSL2: Performing memory compaction.
	[May31 18:00] WSL2: Performing memory compaction.
	[May31 18:01] WSL2: Performing memory compaction.
	[May31 18:02] WSL2: Performing memory compaction.
	[May31 18:03] WSL2: Performing memory compaction.
	[May31 18:04] WSL2: Performing memory compaction.
	[May31 18:05] WSL2: Performing memory compaction.
	[May31 18:06] WSL2: Performing memory compaction.
	[May31 18:07] WSL2: Performing memory compaction.
	[May31 18:08] WSL2: Performing memory compaction.
	[May31 18:09] WSL2: Performing memory compaction.
	[May31 18:10] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [77a3b3590b9a] <==
	* {"level":"warn","ts":"2022-05-31T17:33:09.698Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.7221ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128013381219972195 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-64897985d-9zl57\" mod_revision:437 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-64897985d-9zl57\" value_size:4559 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-64897985d-9zl57\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-05-31T17:33:09.785Z","caller":"traceutil/trace.go:171","msg":"trace[2132922401] linearizableReadLoop","detail":"{readStateIndex:478; appliedIndex:477; }","duration":"195.0218ms","start":"2022-05-31T17:33:09.590Z","end":"2022-05-31T17:33:09.785Z","steps":["trace[2132922401] 'read index received'  (duration: 745.1µs)","trace[2132922401] 'applied index is now lower than readState.Index'  (duration: 194.2719ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T17:33:09.785Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"195.2333ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/functional-20220531173104-2108\" ","response":"range_response_count:1 size:4510"}
	{"level":"info","ts":"2022-05-31T17:33:09.785Z","caller":"traceutil/trace.go:171","msg":"trace[59375942] range","detail":"{range_begin:/registry/minions/functional-20220531173104-2108; range_end:; response_count:1; response_revision:465; }","duration":"195.2987ms","start":"2022-05-31T17:33:09.590Z","end":"2022-05-31T17:33:09.785Z","steps":["trace[59375942] 'agreement among raft nodes before linearized reading'  (duration: 195.1802ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-31T17:33:09.786Z","caller":"traceutil/trace.go:171","msg":"trace[2120425565] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"197.5044ms","start":"2022-05-31T17:33:09.588Z","end":"2022-05-31T17:33:09.786Z","steps":["trace[2120425565] 'compare'  (duration: 106.451ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-31T17:33:11.008Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.1085ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-05-31T17:33:11.008Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.4847ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-64897985d-9zl57\" ","response":"range_response_count:1 size:4454"}
	{"level":"info","ts":"2022-05-31T17:33:11.008Z","caller":"traceutil/trace.go:171","msg":"trace[576033879] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:480; }","duration":"106.2849ms","start":"2022-05-31T17:33:10.902Z","end":"2022-05-31T17:33:11.008Z","steps":["trace[576033879] 'agreement among raft nodes before linearized reading'  (duration: 61.1456ms)","trace[576033879] 'range keys from in-memory index tree'  (duration: 44.9423ms)"],"step_count":2}
	{"level":"info","ts":"2022-05-31T17:33:11.009Z","caller":"traceutil/trace.go:171","msg":"trace[126526605] range","detail":"{range_begin:/registry/pods/kube-system/coredns-64897985d-9zl57; range_end:; response_count:1; response_revision:480; }","duration":"106.5717ms","start":"2022-05-31T17:33:10.902Z","end":"2022-05-31T17:33:11.008Z","steps":["trace[126526605] 'agreement among raft nodes before linearized reading'  (duration: 61.3689ms)","trace[126526605] 'range keys from in-memory index tree'  (duration: 45.0854ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T17:33:11.008Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.4246ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-05-31T17:33:11.009Z","caller":"traceutil/trace.go:171","msg":"trace[773953843] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:0; response_revision:480; }","duration":"106.805ms","start":"2022-05-31T17:33:10.902Z","end":"2022-05-31T17:33:11.009Z","steps":["trace[773953843] 'agreement among raft nodes before linearized reading'  (duration: 61.3958ms)","trace[773953843] 'range keys from in-memory index tree'  (duration: 45.017ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T17:33:11.239Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"130.5692ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/functional-20220531173104-2108\" ","response":"range_response_count:1 size:4510"}
	{"level":"info","ts":"2022-05-31T17:33:11.239Z","caller":"traceutil/trace.go:171","msg":"trace[1737174060] range","detail":"{range_begin:/registry/minions/functional-20220531173104-2108; range_end:; response_count:1; response_revision:483; }","duration":"130.8705ms","start":"2022-05-31T17:33:11.109Z","end":"2022-05-31T17:33:11.239Z","steps":["trace[1737174060] 'agreement among raft nodes before linearized reading'  (duration: 84.0218ms)","trace[1737174060] 'range keys from in-memory index tree'  (duration: 46.5054ms)"],"step_count":2}
	{"level":"info","ts":"2022-05-31T17:33:11.239Z","caller":"traceutil/trace.go:171","msg":"trace[986926177] transaction","detail":"{read_only:false; response_revision:484; number_of_response:1; }","duration":"123.4756ms","start":"2022-05-31T17:33:11.116Z","end":"2022-05-31T17:33:11.239Z","steps":["trace[986926177] 'process raft request'  (duration: 76.6828ms)","trace[986926177] 'compare'  (duration: 46.242ms)"],"step_count":2}
	{"level":"info","ts":"2022-05-31T17:33:11.604Z","caller":"traceutil/trace.go:171","msg":"trace[1288232695] linearizableReadLoop","detail":"{readStateIndex:499; appliedIndex:499; }","duration":"101.8248ms","start":"2022-05-31T17:33:11.502Z","end":"2022-05-31T17:33:11.604Z","steps":["trace[1288232695] 'read index received'  (duration: 101.8156ms)","trace[1288232695] 'applied index is now lower than readState.Index'  (duration: 6.1µs)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T17:33:11.604Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.1533ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-functional-20220531173104-2108\" ","response":"range_response_count:1 size:4203"}
	{"level":"info","ts":"2022-05-31T17:33:11.604Z","caller":"traceutil/trace.go:171","msg":"trace[206237108] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-functional-20220531173104-2108; range_end:; response_count:1; response_revision:486; }","duration":"102.3019ms","start":"2022-05-31T17:33:11.502Z","end":"2022-05-31T17:33:11.604Z","steps":["trace[206237108] 'agreement among raft nodes before linearized reading'  (duration: 102.032ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-31T17:35:19.286Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-05-31T17:35:19.286Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20220531173104-2108","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2022/05/31 17:35:19 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/05/31 17:35:19 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-05-31T17:35:19.298Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2022-05-31T17:35:19.401Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-05-31T17:35:19.403Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-05-31T17:35:19.403Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20220531173104-2108","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> etcd [7eb1fdcb7087] <==
	* {"level":"warn","ts":"2022-05-31T17:40:13.131Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"723.7071ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-05-31T17:40:13.130Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"2.7340478s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-05-31T17:40:13.131Z","caller":"traceutil/trace.go:171","msg":"trace[114366969] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:946; }","duration":"2.7347865s","start":"2022-05-31T17:40:10.396Z","end":"2022-05-31T17:40:13.131Z","steps":["trace[114366969] 'range keys from in-memory index tree'  (duration: 2.7339633s)"],"step_count":1}
	{"level":"warn","ts":"2022-05-31T17:40:13.131Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-31T17:40:10.396Z","time spent":"2.7348276s","remote":"127.0.0.1:36182","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2022-05-31T17:40:13.131Z","caller":"traceutil/trace.go:171","msg":"trace[1235275351] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:948; }","duration":"723.7702ms","start":"2022-05-31T17:40:12.407Z","end":"2022-05-31T17:40:13.131Z","steps":["trace[1235275351] 'agreement among raft nodes before linearized reading'  (duration: 723.6897ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-31T17:40:13.131Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-31T17:40:12.407Z","time spent":"724.0601ms","remote":"127.0.0.1:36182","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2022-05-31T17:40:13.131Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"2.0242807s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:343"}
	{"level":"info","ts":"2022-05-31T17:40:13.131Z","caller":"traceutil/trace.go:171","msg":"trace[1654666654] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:948; }","duration":"2.0246224s","start":"2022-05-31T17:40:11.107Z","end":"2022-05-31T17:40:13.131Z","steps":["trace[1654666654] 'agreement among raft nodes before linearized reading'  (duration: 2.0240912s)"],"step_count":1}
	{"level":"warn","ts":"2022-05-31T17:40:13.131Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-31T17:40:11.107Z","time spent":"2.0246833s","remote":"127.0.0.1:36098","response type":"/etcdserverpb.KV/Range","request count":0,"request size":30,"response count":1,"response size":367,"request content":"key:\"/registry/namespaces/default\" "}
	{"level":"warn","ts":"2022-05-31T17:40:13.130Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"3.3309698s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1133"}
	{"level":"info","ts":"2022-05-31T17:40:13.132Z","caller":"traceutil/trace.go:171","msg":"trace[756194152] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:946; }","duration":"3.3321347s","start":"2022-05-31T17:40:09.799Z","end":"2022-05-31T17:40:13.131Z","steps":["trace[756194152] 'range keys from in-memory index tree'  (duration: 3.3308663s)"],"step_count":1}
	{"level":"warn","ts":"2022-05-31T17:40:13.132Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-31T17:40:09.799Z","time spent":"3.3321908s","remote":"127.0.0.1:36100","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1157,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2022-05-31T17:45:36.047Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":974}
	{"level":"info","ts":"2022-05-31T17:45:36.048Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":974,"took":"1.166ms"}
	{"level":"info","ts":"2022-05-31T17:50:36.063Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1183}
	{"level":"info","ts":"2022-05-31T17:50:36.064Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1183,"took":"715.4µs"}
	{"level":"info","ts":"2022-05-31T17:55:36.080Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1393}
	{"level":"info","ts":"2022-05-31T17:55:36.081Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1393,"took":"821.6µs"}
	{"level":"info","ts":"2022-05-31T17:56:11.306Z","caller":"traceutil/trace.go:171","msg":"trace[1260663686] transaction","detail":"{read_only:false; response_revision:1628; number_of_response:1; }","duration":"109.2685ms","start":"2022-05-31T17:56:11.196Z","end":"2022-05-31T17:56:11.305Z","steps":["trace[1260663686] 'process raft request'  (duration: 83.9453ms)","trace[1260663686] 'compare'  (duration: 25.0457ms)"],"step_count":2}
	{"level":"info","ts":"2022-05-31T18:00:36.113Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1603}
	{"level":"info","ts":"2022-05-31T18:00:36.114Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1603,"took":"652.6µs"}
	{"level":"info","ts":"2022-05-31T18:05:36.130Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1813}
	{"level":"info","ts":"2022-05-31T18:05:36.131Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1813,"took":"682.1µs"}
	{"level":"info","ts":"2022-05-31T18:10:36.144Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2023}
	{"level":"info","ts":"2022-05-31T18:10:36.145Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":2023,"took":"613.2µs"}
	
	* 
	* ==> kernel <==
	*  18:11:20 up 59 min,  0 users,  load average: 0.21, 0.28, 0.44
	Linux functional-20220531173104-2108 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [fedfb55afc4b] <==
	* Trace[1631072592]: ---"Listing from storage done" 1290ms (17:39:42.603)
	Trace[1631072592]: [1.2911037s] [1.2911037s] END
	{"level":"warn","ts":"2022-05-31T17:40:12.397Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc002b6c000/#initially=[https://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	I0531 17:40:13.132369       1 trace.go:205] Trace[1101327824]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.23.6 (linux/amd64) kubernetes/ad33385,audit-id:38759316-3673-4f68-a8bb-151eb07d43f1,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (31-May-2022 17:40:12.413) (total time: 718ms):
	Trace[1101327824]: ---"Object stored in database" 718ms (17:40:13.132)
	Trace[1101327824]: [718.4759ms] [718.4759ms] END
	I0531 17:40:13.132425       1 trace.go:205] Trace[207720524]: "GuaranteedUpdate etcd3" type:*coordination.Lease (31-May-2022 17:40:10.605) (total time: 2526ms):
	Trace[207720524]: ---"Transaction committed" 2525ms (17:40:13.132)
	Trace[207720524]: [2.5264065s] [2.5264065s] END
	I0531 17:40:13.132630       1 trace.go:205] Trace[1666363601]: "List etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (31-May-2022 17:40:10.313) (total time: 2819ms):
	Trace[1666363601]: [2.8192399s] [2.8192399s] END
	I0531 17:40:13.132645       1 trace.go:205] Trace[1788252629]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20220531173104-2108,user-agent:kubelet/v1.23.6 (linux/amd64) kubernetes/ad33385,audit-id:5e81e9b7-91c0-4fe0-ba53-09a8d12ac490,client:192.168.49.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (31-May-2022 17:40:10.605) (total time: 2526ms):
	Trace[1788252629]: ---"Object stored in database" 2526ms (17:40:13.132)
	Trace[1788252629]: [2.5268845s] [2.5268845s] END
	I0531 17:40:13.132811       1 trace.go:205] Trace[1040422210]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.23.6 (linux/amd64) kubernetes/ad33385,audit-id:19943911-f674-4af8-93b9-4d946f70dd89,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (31-May-2022 17:40:11.106) (total time: 2026ms):
	Trace[1040422210]: ---"About to write a response" 2026ms (17:40:13.132)
	Trace[1040422210]: [2.0264395s] [2.0264395s] END
	I0531 17:40:13.133466       1 trace.go:205] Trace[2858733]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:bd4a9a02-c598-413a-a8b1-29a42f138d4c,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (31-May-2022 17:40:10.313) (total time: 2820ms):
	Trace[2858733]: ---"Listing from storage done" 2819ms (17:40:13.132)
	Trace[2858733]: [2.8201283s] [2.8201283s] END
	I0531 17:40:13.132820       1 trace.go:205] Trace[393344095]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:7cb2b9e1-3040-4a1b-98d0-2da211d49607,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (31-May-2022 17:40:09.798) (total time: 3333ms):
	Trace[393344095]: ---"About to write a response" 3333ms (17:40:13.132)
	Trace[393344095]: [3.333828s] [3.333828s] END
	W0531 17:51:52.489350       1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
	W0531 18:07:46.407468       1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
	
	* 
	* ==> kube-controller-manager [67bdd1b22c52] <==
	* I0531 17:35:52.586194       1 shared_informer.go:247] Caches are synced for deployment 
	I0531 17:35:52.586612       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0531 17:35:52.587022       1 shared_informer.go:247] Caches are synced for stateful set 
	I0531 17:35:52.587048       1 shared_informer.go:247] Caches are synced for namespace 
	I0531 17:35:52.587492       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0531 17:35:52.587073       1 shared_informer.go:247] Caches are synced for TTL 
	I0531 17:35:52.590485       1 shared_informer.go:247] Caches are synced for GC 
	I0531 17:35:52.591621       1 shared_informer.go:247] Caches are synced for expand 
	I0531 17:35:52.592161       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0531 17:35:52.593009       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0531 17:35:52.593195       1 shared_informer.go:247] Caches are synced for PV protection 
	I0531 17:35:52.625116       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0531 17:35:52.625668       1 shared_informer.go:247] Caches are synced for attach detach 
	I0531 17:35:52.691797       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:35:52.717799       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:35:53.193618       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:35:53.216601       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:35:53.216751       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0531 17:36:28.124977       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0531 17:36:37.487272       1 event.go:294] "Event occurred" object="default/hello-node-connect" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-74cf8bc446 to 1"
	I0531 17:36:37.689199       1 event.go:294] "Event occurred" object="default/hello-node-connect-74cf8bc446" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-74cf8bc446-5wbb6"
	I0531 17:37:01.384489       1 event.go:294] "Event occurred" object="default/hello-node" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-54fbb85 to 1"
	I0531 17:37:02.485730       1 event.go:294] "Event occurred" object="default/hello-node-54fbb85" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-54fbb85-c6cbz"
	I0531 17:39:27.189379       1 event.go:294] "Event occurred" object="default/mysql" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-b87c45988 to 1"
	I0531 17:39:27.287705       1 event.go:294] "Event occurred" object="default/mysql-b87c45988" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-b87c45988-hqk85"
	
	* 
	* ==> kube-controller-manager [ddadbb8af8f5] <==
	* I0531 17:33:02.393030       1 shared_informer.go:247] Caches are synced for taint 
	I0531 17:33:02.393461       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	W0531 17:33:02.393579       1 node_lifecycle_controller.go:1012] Missing timestamp for Node functional-20220531173104-2108. Assuming now as a timestamp.
	I0531 17:33:02.393654       1 node_lifecycle_controller.go:1213] Controller detected that zone  is now in state Normal.
	I0531 17:33:02.393994       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0531 17:33:02.394783       1 event.go:294] "Event occurred" object="functional-20220531173104-2108" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-20220531173104-2108 event: Registered Node functional-20220531173104-2108 in Controller"
	I0531 17:33:02.394971       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0531 17:33:02.489977       1 range_allocator.go:374] Set node functional-20220531173104-2108 PodCIDR to [10.244.0.0/24]
	I0531 17:33:02.492251       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:33:02.496977       1 shared_informer.go:247] Caches are synced for disruption 
	I0531 17:33:02.497081       1 disruption.go:371] Sending events to api server.
	I0531 17:33:02.503283       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0531 17:33:02.509448       1 shared_informer.go:247] Caches are synced for stateful set 
	I0531 17:33:02.584994       1 shared_informer.go:247] Caches are synced for resource quota 
	I0531 17:33:02.594681       1 shared_informer.go:247] Caches are synced for service account 
	I0531 17:33:02.595225       1 shared_informer.go:247] Caches are synced for namespace 
	I0531 17:33:02.889741       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0531 17:33:02.985790       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:33:02.987849       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0531 17:33:02.987963       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0531 17:33:02.995082       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sc9l4"
	I0531 17:33:03.193258       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-ct8qm"
	I0531 17:33:03.295205       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-9zl57"
	I0531 17:33:04.423418       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0531 17:33:04.599470       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-ct8qm"
	
	* 
	* ==> kube-proxy [909e135e054c] <==
	* E0531 17:33:06.103431       1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I0531 17:33:06.106486       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0531 17:33:06.110222       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0531 17:33:06.189907       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0531 17:33:06.195133       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0531 17:33:06.198664       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0531 17:33:06.327073       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0531 17:33:06.327259       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0531 17:33:06.327392       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 17:33:06.688869       1 server_others.go:206] "Using iptables Proxier"
	I0531 17:33:06.689291       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 17:33:06.689740       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 17:33:06.689814       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 17:33:06.692445       1 server.go:656] "Version info" version="v1.23.6"
	I0531 17:33:06.696163       1 config.go:317] "Starting service config controller"
	I0531 17:33:06.696291       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 17:33:06.696737       1 config.go:226] "Starting endpoint slice config controller"
	I0531 17:33:06.696756       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 17:33:06.797135       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0531 17:33:06.797221       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [ad11c3777c4e] <==
	* I0531 17:35:24.098053       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0531 17:35:24.101813       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0531 17:35:24.104508       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0531 17:35:24.107448       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	E0531 17:35:24.185240       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220531173104-2108": dial tcp 192.168.49.2:8441: connect: connection refused
	E0531 17:35:25.227143       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220531173104-2108": dial tcp 192.168.49.2:8441: connect: connection refused
	I0531 17:35:33.386904       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0531 17:35:33.387082       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0531 17:35:33.387156       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 17:35:33.805882       1 server_others.go:206] "Using iptables Proxier"
	I0531 17:35:33.806008       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 17:35:33.806021       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 17:35:33.806044       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 17:35:33.806489       1 server.go:656] "Version info" version="v1.23.6"
	I0531 17:35:33.807519       1 config.go:317] "Starting service config controller"
	I0531 17:35:33.807537       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 17:35:33.807580       1 config.go:226] "Starting endpoint slice config controller"
	I0531 17:35:33.807585       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 17:35:33.908774       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0531 17:35:33.908937       1 shared_informer.go:247] Caches are synced for service config 
	W0531 17:35:34.187163       1 reflector.go:442] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
	W0531 17:35:34.187246       1 reflector.go:442] k8s.io/client-go/informers/factory.go:134: watch of *v1.EndpointSlice ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
	E0531 17:35:34.187323       1 event_broadcaster.go:262] Unable to write event: 'Post "https://control-plane.minikube.internal:8441/apis/events.k8s.io/v1/namespaces/default/events": unexpected EOF' (may retry after sleeping)
	W0531 17:35:35.389226       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=586": dial tcp 192.168.49.2:8441: connect: connection refused
	E0531 17:35:35.389399       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=586": dial tcp 192.168.49.2:8441: connect: connection refused
	
	* 
	* ==> kube-scheduler [adf0898fd4ff] <==
	* E0531 17:35:39.190312       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 17:35:39.190431       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 17:35:39.190512       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 17:35:39.190547       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 17:35:39.190726       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 17:35:39.191107       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 17:35:39.191011       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 17:35:39.190885       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 17:35:39.191304       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 17:35:39.191378       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 17:35:39.191418       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 17:35:39.191448       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 17:35:39.191508       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)
	E0531 17:35:39.191561       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: unknown (get pods)
	W0531 17:35:39.191747       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 17:35:39.191747       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 17:35:39.190596       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 17:35:39.287970       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 17:35:39.191321       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 17:35:39.191420       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 17:35:39.190556       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 17:35:39.288095       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 17:35:39.191783       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 17:35:39.295870       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 17:35:39.296068       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	
	* 
	* ==> kube-scheduler [f808f562e4c4] <==
	* W0531 17:32:45.288186       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 17:32:45.288296       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 17:32:45.288300       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 17:32:45.288524       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 17:32:45.288540       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 17:32:45.288552       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 17:32:45.589835       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 17:32:45.589984       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 17:32:45.702471       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 17:32:45.702598       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 17:32:45.731493       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 17:32:45.731611       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 17:32:45.741158       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 17:32:45.741325       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 17:32:45.807409       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 17:32:45.807526       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 17:32:45.829957       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 17:32:45.830093       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 17:32:47.002641       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 17:32:47.002772       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 17:32:47.087448       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0531 17:32:51.587983       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0531 17:35:19.631143       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0531 17:35:19.631934       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0531 17:35:19.632617       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 17:31:59 UTC, end at Tue 2022-05-31 18:11:20 UTC. --
	May 31 17:37:05 functional-20220531173104-2108 kubelet[6055]: I0531 17:37:05.105444    6055 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlwfx\" (UniqueName: \"kubernetes.io/projected/b31211e5-01d1-4a06-9d69-72182ab83802-kube-api-access-mlwfx\") pod \"sp-pod\" (UID: \"b31211e5-01d1-4a06-9d69-72182ab83802\") " pod="default/sp-pod"
	May 31 17:37:05 functional-20220531173104-2108 kubelet[6055]: I0531 17:37:05.287672    6055 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-54fbb85-c6cbz through plugin: invalid network status for"
	May 31 17:37:05 functional-20220531173104-2108 kubelet[6055]: I0531 17:37:05.287695    6055 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="7ef50adfd038f2286f21035755a0bc0609a23b76236474e8b4d00792eecf0d76"
	May 31 17:37:05 functional-20220531173104-2108 kubelet[6055]: I0531 17:37:05.906126    6055 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=21fd3f05-62fb-433b-8773-6875c87575ce path="/var/lib/kubelet/pods/21fd3f05-62fb-433b-8773-6875c87575ce/volumes"
	May 31 17:37:06 functional-20220531173104-2108 kubelet[6055]: I0531 17:37:06.305829    6055 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-54fbb85-c6cbz through plugin: invalid network status for"
	May 31 17:37:07 functional-20220531173104-2108 kubelet[6055]: I0531 17:37:07.129839    6055 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="00208e1f542a1644bacc9fc2e4e3d6cce48255de697dfc5e59ad362f05793ffb"
	May 31 17:37:07 functional-20220531173104-2108 kubelet[6055]: I0531 17:37:07.130189    6055 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/sp-pod through plugin: invalid network status for"
	May 31 17:37:07 functional-20220531173104-2108 kubelet[6055]: I0531 17:37:07.139575    6055 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-connect-74cf8bc446-5wbb6 through plugin: invalid network status for"
	May 31 17:37:08 functional-20220531173104-2108 kubelet[6055]: I0531 17:37:08.207787    6055 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-connect-74cf8bc446-5wbb6 through plugin: invalid network status for"
	May 31 17:37:08 functional-20220531173104-2108 kubelet[6055]: I0531 17:37:08.219427    6055 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-54fbb85-c6cbz through plugin: invalid network status for"
	May 31 17:37:08 functional-20220531173104-2108 kubelet[6055]: I0531 17:37:08.233119    6055 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/sp-pod through plugin: invalid network status for"
	May 31 17:37:09 functional-20220531173104-2108 kubelet[6055]: I0531 17:37:09.634733    6055 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/sp-pod through plugin: invalid network status for"
	May 31 17:39:27 functional-20220531173104-2108 kubelet[6055]: I0531 17:39:27.296674    6055 topology_manager.go:200] "Topology Admit Handler"
	May 31 17:39:27 functional-20220531173104-2108 kubelet[6055]: I0531 17:39:27.489435    6055 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47pdt\" (UniqueName: \"kubernetes.io/projected/f511a394-30ea-40ce-9841-29501af6ecf7-kube-api-access-47pdt\") pod \"mysql-b87c45988-hqk85\" (UID: \"f511a394-30ea-40ce-9841-29501af6ecf7\") " pod="default/mysql-b87c45988-hqk85"
	May 31 17:39:29 functional-20220531173104-2108 kubelet[6055]: I0531 17:39:29.228606    6055 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="333834c1cb3d8874bb9a5260ff5c49ec9c421df7d40510d4994f747cf7b71ec7"
	May 31 17:39:29 functional-20220531173104-2108 kubelet[6055]: I0531 17:39:29.228681    6055 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-b87c45988-hqk85 through plugin: invalid network status for"
	May 31 17:39:30 functional-20220531173104-2108 kubelet[6055]: I0531 17:39:30.248557    6055 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-b87c45988-hqk85 through plugin: invalid network status for"
	May 31 17:40:13 functional-20220531173104-2108 kubelet[6055]: I0531 17:40:13.811273    6055 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-b87c45988-hqk85 through plugin: invalid network status for"
	May 31 17:40:32 functional-20220531173104-2108 kubelet[6055]: W0531 17:40:32.493500    6055 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	May 31 17:45:32 functional-20220531173104-2108 kubelet[6055]: W0531 17:45:32.497023    6055 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	May 31 17:50:32 functional-20220531173104-2108 kubelet[6055]: W0531 17:50:32.501018    6055 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	May 31 17:55:32 functional-20220531173104-2108 kubelet[6055]: W0531 17:55:32.501239    6055 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	May 31 18:00:32 functional-20220531173104-2108 kubelet[6055]: W0531 18:00:32.500643    6055 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	May 31 18:05:32 functional-20220531173104-2108 kubelet[6055]: W0531 18:05:32.502294    6055 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	May 31 18:10:32 functional-20220531173104-2108 kubelet[6055]: W0531 18:10:32.502814    6055 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	
	* 
	* ==> storage-provisioner [a24e38366a7b] <==
	* I0531 17:35:42.604630       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0531 17:35:42.693442       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0531 17:35:42.693925       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0531 17:36:00.237376       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0531 17:36:00.237517       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"384e3ae8-40aa-4b44-8ed1-ea8b9c2eb632", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20220531173104-2108_eb15f159-a5f4-4351-8afd-7524ee07e4af became leader
	I0531 17:36:00.237615       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20220531173104-2108_eb15f159-a5f4-4351-8afd-7524ee07e4af!
	I0531 17:36:00.338799       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20220531173104-2108_eb15f159-a5f4-4351-8afd-7524ee07e4af!
	I0531 17:36:28.124827       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0531 17:36:28.125875       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"18fcadfc-4369-421a-97a7-f376ed316380", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0531 17:36:28.125373       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    b434fe0f-783d-4d0d-8b61-1218c61b254f 471 0 2022-05-31 17:33:10 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2022-05-31 17:33:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-18fcadfc-4369-421a-97a7-f376ed316380 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  18fcadfc-4369-421a-97a7-f376ed316380 686 0 2022-05-31 17:36:28 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2022-05-31 17:36:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl.exe Update v1 2022-05-31 17:36:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0531 17:36:28.128438       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-18fcadfc-4369-421a-97a7-f376ed316380" provisioned
	I0531 17:36:28.129530       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0531 17:36:28.129644       1 volume_store.go:212] Trying to save persistentvolume "pvc-18fcadfc-4369-421a-97a7-f376ed316380"
	I0531 17:36:28.141506       1 volume_store.go:219] persistentvolume "pvc-18fcadfc-4369-421a-97a7-f376ed316380" saved
	I0531 17:36:28.142141       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"18fcadfc-4369-421a-97a7-f376ed316380", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-18fcadfc-4369-421a-97a7-f376ed316380
	
	* 
	* ==> storage-provisioner [c93090aa2535] <==
	* I0531 17:35:24.385678       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0531 17:35:24.389863       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20220531173104-2108 -n functional-20220531173104-2108
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20220531173104-2108 -n functional-20220531173104-2108: (6.3387248s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-20220531173104-2108 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context functional-20220531173104-2108 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context functional-20220531173104-2108 describe pod : exit status 1 (199.0365ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:277: kubectl --context functional-20220531173104-2108 describe pod : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd (2067.20s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (180.95s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220531173104-2108 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220531173104-2108 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
E0531 17:39:13.017354    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220531173104-2108 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
functional_test_tunnel_test.go:180: nginx-svc svc.status.loadBalancer.ingress never got an IP: timed out waiting for the condition
functional_test_tunnel_test.go:181: (dbg) Run:  kubectl --context functional-20220531173104-2108 get svc nginx-svc
functional_test_tunnel_test.go:185: failed to kubectl get svc nginx-svc:

-- stdout --
	NAME        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
	nginx-svc   LoadBalancer   10.101.31.148   <pending>     80:31262/TCP   3m16s

-- /stdout --
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (180.95s)

TestKubernetesUpgrade (703.93s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220531191315-2108 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220531191315-2108 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker: exit status 80 (6m52.9113005s)

-- stdout --
	* [kubernetes-upgrade-20220531191315-2108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node kubernetes-upgrade-20220531191315-2108 in cluster kubernetes-upgrade-20220531191315-2108
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "kubernetes-upgrade-20220531191315-2108" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0531 19:13:16.051544    9456 out.go:296] Setting OutFile to fd 1564 ...
	I0531 19:13:16.108538    9456 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:13:16.108538    9456 out.go:309] Setting ErrFile to fd 1456...
	I0531 19:13:16.108538    9456 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:13:16.122545    9456 out.go:303] Setting JSON to false
	I0531 19:13:16.124544    9456 start.go:115] hostinfo: {"hostname":"minikube7","uptime":82666,"bootTime":1653941730,"procs":161,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0531 19:13:16.125539    9456 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 19:13:16.132540    9456 out.go:177] * [kubernetes-upgrade-20220531191315-2108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0531 19:13:16.138533    9456 notify.go:193] Checking for updates...
	I0531 19:13:16.141543    9456 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0531 19:13:16.147538    9456 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0531 19:13:16.153544    9456 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 19:13:16.157534    9456 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:13:16.162537    9456 config.go:178] Loaded profile config "NoKubernetes-20220531190920-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0531 19:13:16.163541    9456 config.go:178] Loaded profile config "missing-upgrade-20220531190920-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0531 19:13:16.163541    9456 config.go:178] Loaded profile config "stopped-upgrade-20220531190920-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0531 19:13:16.163541    9456 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 19:13:19.119230    9456 docker.go:137] docker version: linux-20.10.14
	I0531 19:13:19.131242    9456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:13:21.446504    9456 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.315088s)
	I0531 19:13:21.446972    9456 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:87 OomKillDisable:true NGoroutines:66 SystemTime:2022-05-31 19:13:20.2614209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 19:13:21.450265    9456 out.go:177] * Using the docker driver based on user configuration
	I0531 19:13:21.454087    9456 start.go:284] selected driver: docker
	I0531 19:13:21.454087    9456 start.go:806] validating driver "docker" against <nil>
	I0531 19:13:21.454087    9456 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:13:21.522178    9456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:13:23.859728    9456 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.3375391s)
	I0531 19:13:23.859728    9456 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:89 OomKillDisable:true NGoroutines:66 SystemTime:2022-05-31 19:13:22.6592397 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 19:13:23.859728    9456 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0531 19:13:23.860718    9456 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0531 19:13:23.873716    9456 out.go:177] * Using Docker Desktop driver with the root privilege
	I0531 19:13:23.876705    9456 cni.go:95] Creating CNI manager for ""
	I0531 19:13:23.876996    9456 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 19:13:23.877068    9456 start_flags.go:306] config:
	{Name:kubernetes-upgrade-20220531191315-2108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220531191315-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 19:13:23.880975    9456 out.go:177] * Starting control plane node kubernetes-upgrade-20220531191315-2108 in cluster kubernetes-upgrade-20220531191315-2108
	I0531 19:13:23.884698    9456 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 19:13:23.887466    9456 out.go:177] * Pulling base image ...
	I0531 19:13:23.890435    9456 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0531 19:13:23.890435    9456 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 19:13:23.890435    9456 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0531 19:13:23.890435    9456 cache.go:57] Caching tarball of preloaded images
	I0531 19:13:23.891170    9456 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 19:13:23.891170    9456 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0531 19:13:23.891714    9456 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-20220531191315-2108\config.json ...
	I0531 19:13:23.891925    9456 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kubernetes-upgrade-20220531191315-2108\config.json: {Name:mkebf72d829d95f5a7e4f9a5d36a090b7a97ede7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:13:25.079664    9456 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 19:13:25.079664    9456 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 19:13:25.079784    9456 cache.go:206] Successfully downloaded all kic artifacts
	I0531 19:13:25.080103    9456 start.go:352] acquiring machines lock for kubernetes-upgrade-20220531191315-2108: {Name:mk11cef1482e399c3184a31b95484b539a7a52c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:13:25.080305    9456 start.go:356] acquired machines lock for "kubernetes-upgrade-20220531191315-2108" in 133µs
	I0531 19:13:25.080636    9456 start.go:91] Provisioning new machine with config: &{Name:kubernetes-upgrade-20220531191315-2108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220531191315-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 19:13:25.080917    9456 start.go:131] createHost starting for "" (driver="docker")
	I0531 19:13:25.091190    9456 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0531 19:13:25.091190    9456 start.go:165] libmachine.API.Create for "kubernetes-upgrade-20220531191315-2108" (driver="docker")
	I0531 19:13:25.091190    9456 client.go:168] LocalClient.Create starting
	I0531 19:13:25.092655    9456 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0531 19:13:25.092951    9456 main.go:134] libmachine: Decoding PEM data...
	I0531 19:13:25.093231    9456 main.go:134] libmachine: Parsing certificate...
	I0531 19:13:25.093478    9456 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0531 19:13:25.093686    9456 main.go:134] libmachine: Decoding PEM data...
	I0531 19:13:25.093686    9456 main.go:134] libmachine: Parsing certificate...
	I0531 19:13:25.106627    9456 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220531191315-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 19:13:26.213958    9456 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220531191315-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 19:13:26.213958    9456 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220531191315-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1063149s)
	I0531 19:13:26.223940    9456 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220531191315-2108] to gather additional debugging logs...
	I0531 19:13:26.223940    9456 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220531191315-2108
	W0531 19:13:27.450864    9456 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220531191315-2108 returned with exit code 1
	I0531 19:13:27.450938    9456 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220531191315-2108: (1.2268472s)
	I0531 19:13:27.450938    9456 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220531191315-2108]: docker network inspect kubernetes-upgrade-20220531191315-2108: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220531191315-2108
	I0531 19:13:27.451016    9456 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220531191315-2108]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220531191315-2108
	
	** /stderr **
	I0531 19:13:27.457995    9456 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 19:13:28.691794    9456 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2332195s)
	I0531 19:13:28.710763    9456 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000c864e8] misses:0}
	I0531 19:13:28.710763    9456 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 19:13:28.710763    9456 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220531191315-2108 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0531 19:13:28.721776    9456 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220531191315-2108
	I0531 19:13:30.032903    9456 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220531191315-2108: (1.3110035s)
	I0531 19:13:30.033197    9456 network_create.go:99] docker network kubernetes-upgrade-20220531191315-2108 192.168.49.0/24 created
	I0531 19:13:30.033197    9456 kic.go:106] calculated static IP "192.168.49.2" for the "kubernetes-upgrade-20220531191315-2108" container
	I0531 19:13:30.052659    9456 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 19:13:31.295644    9456 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.2429783s)
	I0531 19:13:31.302185    9456 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-20220531191315-2108 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --label created_by.minikube.sigs.k8s.io=true
	I0531 19:13:32.540167    9456 cli_runner.go:217] Completed: docker volume create kubernetes-upgrade-20220531191315-2108 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --label created_by.minikube.sigs.k8s.io=true: (1.2366623s)
	I0531 19:13:32.540167    9456 oci.go:103] Successfully created a docker volume kubernetes-upgrade-20220531191315-2108
	I0531 19:13:32.554945    9456 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-20220531191315-2108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --entrypoint /usr/bin/test -v kubernetes-upgrade-20220531191315-2108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib
	I0531 19:13:35.752774    9456 cli_runner.go:217] Completed: docker run --rm --name kubernetes-upgrade-20220531191315-2108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --entrypoint /usr/bin/test -v kubernetes-upgrade-20220531191315-2108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib: (3.1978142s)
	I0531 19:13:35.752774    9456 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-20220531191315-2108
	I0531 19:13:35.752774    9456 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0531 19:13:35.752774    9456 kic.go:179] Starting extracting preloaded images to volume ...
	I0531 19:13:35.760295    9456 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20220531191315-2108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 19:14:06.449661    9456 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20220531191315-2108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir: (30.6892276s)
	I0531 19:14:06.449661    9456 kic.go:188] duration metric: took 30.696748 seconds to extract preloaded images to volume
	I0531 19:14:06.457652    9456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:14:09.016244    9456 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.5585806s)
	I0531 19:14:09.016244    9456 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:74 OomKillDisable:true NGoroutines:62 SystemTime:2022-05-31 19:14:07.7610822 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 19:14:09.034262    9456 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 19:14:11.585102    9456 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.5508285s)
	I0531 19:14:11.595212    9456 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20220531191315-2108 --name kubernetes-upgrade-20220531191315-2108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --network kubernetes-upgrade-20220531191315-2108 --ip 192.168.49.2 --volume kubernetes-upgrade-20220531191315-2108:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418
	W0531 19:14:13.214830    9456 cli_runner.go:211] docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20220531191315-2108 --name kubernetes-upgrade-20220531191315-2108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --network kubernetes-upgrade-20220531191315-2108 --ip 192.168.49.2 --volume kubernetes-upgrade-20220531191315-2108:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 returned with exit code 125
	I0531 19:14:13.214830    9456 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20220531191315-2108 --name kubernetes-upgrade-20220531191315-2108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --network kubernetes-upgrade-20220531191315-2108 --ip 192.168.49.2 --volume kubernetes-upgrade-20220531191315-2108:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418: (1.6196099s)
	I0531 19:14:13.214830    9456 client.go:171] LocalClient.Create took 48.1234214s
	I0531 19:14:15.232127    9456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:14:15.240558    9456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108
	W0531 19:14:16.427583    9456 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108 returned with exit code 1
	I0531 19:14:16.427583    9456 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108: (1.1867867s)
	I0531 19:14:16.427583    9456 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 19:14:16.734713    9456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108
	W0531 19:14:17.960953    9456 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108 returned with exit code 1
	I0531 19:14:17.961324    9456 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108: (1.226234s)
	W0531 19:14:17.961429    9456 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0531 19:14:17.961575    9456 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 19:14:17.972305    9456 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 19:14:17.978925    9456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108
	W0531 19:14:19.265188    9456 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108 returned with exit code 1
	I0531 19:14:19.265188    9456 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108: (1.286257s)
	I0531 19:14:19.265188    9456 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 19:14:19.571381    9456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108
	W0531 19:14:20.833228    9456 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108 returned with exit code 1
	I0531 19:14:20.833228    9456 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108: (1.261842s)
	W0531 19:14:20.833228    9456 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0531 19:14:20.833228    9456 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 19:14:20.833228    9456 start.go:134] duration metric: createHost completed in 55.751964s
	I0531 19:14:20.833228    9456 start.go:81] releasing machines lock for "kubernetes-upgrade-20220531191315-2108", held for 55.7525551s
	W0531 19:14:20.833228    9456 start.go:599] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20220531191315-2108 --name kubernetes-upgrade-20220531191315-2108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --network kubernetes-upgrade-20220531191315-2108 --ip 192.168.49.2 --volume kubernetes-upgrade-20220531191315-2108:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418: exit status 125
	stdout:
	9999f89419ab60fc86f55222180a60e9b37b4d52db98ed215b80d388bded3524
	
	stderr:
	docker: Error response from daemon: network kubernetes-upgrade-20220531191315-2108 not found.
	I0531 19:14:20.849287    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:14:22.015257    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1659647s)
	W0531 19:14:22.015371    9456 start.go:604] delete host: Docker machine "kubernetes-upgrade-20220531191315-2108" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0531 19:14:22.015465    9456 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20220531191315-2108 --name kubernetes-upgrade-20220531191315-2108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --network kubernetes-upgrade-20220531191315-2108 --ip 192.168.49.2 --volume kubernetes-upgrade-20220531191315-2108:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418: exit status 125
	stdout:
	9999f89419ab60fc86f55222180a60e9b37b4d52db98ed215b80d388bded3524
	
	stderr:
	docker: Error response from daemon: network kubernetes-upgrade-20220531191315-2108 not found.
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20220531191315-2108 --name kubernetes-upgrade-20220531191315-2108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --network kubernetes-upgrade-20220531191315-2108 --ip 192.168.49.2 --volume kubernetes-upgrade-20220531191315-2108:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418: exit status 125
	stdout:
	9999f89419ab60fc86f55222180a60e9b37b4d52db98ed215b80d388bded3524
	
	stderr:
	docker: Error response from daemon: network kubernetes-upgrade-20220531191315-2108 not found.
	
	I0531 19:14:22.015867    9456 start.go:614] Will try again in 5 seconds ...
	I0531 19:14:27.021335    9456 start.go:352] acquiring machines lock for kubernetes-upgrade-20220531191315-2108: {Name:mk11cef1482e399c3184a31b95484b539a7a52c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:14:27.021335    9456 start.go:356] acquired machines lock for "kubernetes-upgrade-20220531191315-2108" in 0s
	I0531 19:14:27.021963    9456 start.go:94] Skipping create...Using existing machine configuration
	I0531 19:14:27.021963    9456 fix.go:55] fixHost starting: 
	I0531 19:14:27.039733    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:14:28.282752    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.2428993s)
	I0531 19:14:28.282844    9456 fix.go:103] recreateIfNeeded on kubernetes-upgrade-20220531191315-2108: state= err=<nil>
	I0531 19:14:28.282844    9456 fix.go:108] machineExists: false. err=machine does not exist
	I0531 19:14:28.327397    9456 out.go:177] * docker "kubernetes-upgrade-20220531191315-2108" container is missing, will recreate.
	I0531 19:14:28.330269    9456 delete.go:124] DEMOLISHING kubernetes-upgrade-20220531191315-2108 ...
	I0531 19:14:28.345273    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:14:29.518144    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1728657s)
	I0531 19:14:29.518144    9456 stop.go:79] host is in state 
	I0531 19:14:29.519299    9456 main.go:134] libmachine: Stopping "kubernetes-upgrade-20220531191315-2108"...
	I0531 19:14:29.541457    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:14:30.771045    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.2295821s)
	I0531 19:14:30.787060    9456 kic_runner.go:93] Run: systemctl --version
	I0531 19:14:30.787060    9456 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-20220531191315-2108 systemctl --version]
	I0531 19:14:31.988468    9456 kic_runner.go:93] Run: sudo service kubelet stop
	I0531 19:14:31.988468    9456 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-20220531191315-2108 sudo service kubelet stop]
	I0531 19:14:33.135987    9456 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 9999f89419ab60fc86f55222180a60e9b37b4d52db98ed215b80d388bded3524 is not running
	
	** /stderr **
	W0531 19:14:33.135987    9456 kic.go:439] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 9999f89419ab60fc86f55222180a60e9b37b4d52db98ed215b80d388bded3524 is not running
	I0531 19:14:33.162307    9456 kic_runner.go:93] Run: sudo service kubelet stop
	I0531 19:14:33.162395    9456 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-20220531191315-2108 sudo service kubelet stop]
	I0531 19:14:34.365880    9456 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 9999f89419ab60fc86f55222180a60e9b37b4d52db98ed215b80d388bded3524 is not running
	
	** /stderr **
	W0531 19:14:34.366181    9456 kic.go:441] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 9999f89419ab60fc86f55222180a60e9b37b4d52db98ed215b80d388bded3524 is not running
	I0531 19:14:34.382530    9456 kic_runner.go:93] Run: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0531 19:14:34.382530    9456 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-20220531191315-2108 docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}]
	I0531 19:14:35.623591    9456 kic.go:452] unable list containers : docker: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 9999f89419ab60fc86f55222180a60e9b37b4d52db98ed215b80d388bded3524 is not running
	I0531 19:14:35.623591    9456 kic.go:462] successfully stopped kubernetes!
	I0531 19:14:35.647602    9456 kic_runner.go:93] Run: pgrep kube-apiserver
	I0531 19:14:35.647602    9456 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-20220531191315-2108 pgrep kube-apiserver]
	I0531 19:14:38.208276    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:14:39.394947    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1866663s)
	I0531 19:14:42.418799    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:14:43.572207    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1534032s)
	I0531 19:14:46.592604    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:14:47.689109    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.0965002s)
	I0531 19:14:50.720278    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:14:51.958261    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.2379773s)
	I0531 19:14:54.979217    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:14:56.223241    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.2440184s)
	I0531 19:14:59.257574    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:15:00.480247    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.2226679s)
	I0531 19:15:03.511988    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:15:04.648939    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1369459s)
	I0531 19:15:07.671821    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:15:08.920227    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.2484003s)
	I0531 19:15:11.938782    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:15:13.046534    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.106803s)
	I0531 19:15:16.074704    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:15:17.164992    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.0902834s)
	I0531 19:15:20.188259    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:15:21.393966    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.2056429s)
	I0531 19:15:24.421799    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:15:25.563799    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1418224s)
	I0531 19:15:28.585543    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:15:29.939500    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.353951s)
	I0531 19:15:32.962277    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:15:34.114570    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1522878s)
	I0531 19:15:37.133058    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:15:38.358615    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.2255518s)
	I0531 19:15:41.385088    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:15:42.530147    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1450542s)
	I0531 19:15:45.555592    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:15:46.725728    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1701306s)
	I0531 19:15:49.753639    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:15:50.920387    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1667431s)
	I0531 19:15:53.936317    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:15:55.102600    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1662777s)
	I0531 19:15:58.125561    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:15:59.304263    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1785489s)
	I0531 19:16:02.331884    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:16:03.630687    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.2987973s)
	I0531 19:16:06.655003    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:16:07.886717    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.2317082s)
	I0531 19:16:10.916836    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:16:12.256871    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.3400292s)
	I0531 19:16:15.285578    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:16:16.659456    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.373871s)
	I0531 19:16:19.692829    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:16:21.049802    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.3568779s)
	I0531 19:16:24.072729    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:16:25.395748    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.3227137s)
	I0531 19:16:28.442020    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:16:29.756071    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.3140451s)
	I0531 19:16:32.777089    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:16:34.093957    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.3168616s)
	I0531 19:16:37.125879    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:16:38.452417    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.3265329s)
	I0531 19:16:41.477270    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:16:42.895829    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.4185534s)
	I0531 19:16:45.932500    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:16:47.270234    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.337267s)
	I0531 19:16:50.296473    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:16:51.789396    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.4929153s)
	I0531 19:16:54.808640    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:16:56.072207    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.2635616s)
	I0531 19:16:59.094954    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:17:00.497621    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.4026606s)
	I0531 19:17:03.518263    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:17:04.713109    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1948411s)
	I0531 19:17:07.739611    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:17:08.955274    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.2154762s)
	I0531 19:17:11.978833    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:17:13.092324    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1134862s)
	I0531 19:17:16.122807    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:17:17.225864    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1030524s)
	I0531 19:17:20.256164    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:17:21.399922    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1437533s)
	I0531 19:17:24.425813    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:17:25.703123    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.2771299s)
	I0531 19:17:28.724223    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:17:29.855893    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1316654s)
	I0531 19:17:32.872108    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:17:34.036115    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1640019s)
	I0531 19:17:37.051120    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:17:38.200935    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1498096s)
	I0531 19:17:41.220717    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:17:42.333737    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.113015s)
	I0531 19:17:45.359703    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:17:46.556159    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1964503s)
	I0531 19:17:49.584999    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:17:50.707137    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1221336s)
	I0531 19:17:53.730298    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:17:54.925203    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1949s)
	I0531 19:17:57.956316    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:17:59.154459    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.198137s)
	I0531 19:18:02.193193    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:18:03.456281    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.2630151s)
	I0531 19:18:06.474693    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:18:07.865168    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1801116s)
	I0531 19:18:10.888962    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:18:11.965989    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.0770221s)
	I0531 19:18:14.994044    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:18:16.150649    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.156415s)
	I0531 19:18:19.548557    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:18:20.627750    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.0790306s)
	I0531 19:18:23.643944    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:18:24.697757    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.0527348s)
	I0531 19:18:27.716689    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:18:28.906688    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.189993s)
	I0531 19:18:31.928176    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:18:33.071869    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1436882s)
	I0531 19:18:36.094758    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:18:37.243220    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1484214s)
	I0531 19:18:40.263757    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:18:41.356952    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.0930929s)
	I0531 19:18:44.384031    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:18:45.510215    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1261793s)
	I0531 19:18:48.549728    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:18:49.685461    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1357282s)
	I0531 19:18:52.686696    9456 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0531 19:18:52.686992    9456 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
	I0531 19:18:52.705173    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:18:53.829743    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1245652s)
	W0531 19:18:53.829743    9456 delete.go:135] deletehost failed: Docker machine "kubernetes-upgrade-20220531191315-2108" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0531 19:18:53.836742    9456 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-20220531191315-2108
	I0531 19:18:54.981964    9456 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} kubernetes-upgrade-20220531191315-2108: (1.1452166s)
	I0531 19:18:54.989009    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:18:56.148717    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1595558s)
	I0531 19:18:56.158472    9456 cli_runner.go:164] Run: docker exec --privileged -t kubernetes-upgrade-20220531191315-2108 /bin/bash -c "sudo init 0"
	W0531 19:18:58.545134    9456 cli_runner.go:211] docker exec --privileged -t kubernetes-upgrade-20220531191315-2108 /bin/bash -c "sudo init 0" returned with exit code 1
	I0531 19:18:58.545134    9456 cli_runner.go:217] Completed: docker exec --privileged -t kubernetes-upgrade-20220531191315-2108 /bin/bash -c "sudo init 0": (2.3865748s)
	I0531 19:18:58.545134    9456 oci.go:625] error shutdown kubernetes-upgrade-20220531191315-2108: docker exec --privileged -t kubernetes-upgrade-20220531191315-2108 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 9999f89419ab60fc86f55222180a60e9b37b4d52db98ed215b80d388bded3524 is not running
	I0531 19:18:59.553137    9456 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}
	I0531 19:19:00.683918    9456 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220531191315-2108 --format={{.State.Status}}: (1.1305765s)
	I0531 19:19:00.683992    9456 oci.go:639] temporary error: container kubernetes-upgrade-20220531191315-2108 status is  but expect it to be exited
	I0531 19:19:00.684072    9456 oci.go:645] Successfully shutdown container kubernetes-upgrade-20220531191315-2108
	I0531 19:19:00.692806    9456 cli_runner.go:164] Run: docker rm -f -v kubernetes-upgrade-20220531191315-2108
	I0531 19:19:01.917926    9456 cli_runner.go:217] Completed: docker rm -f -v kubernetes-upgrade-20220531191315-2108: (1.2251145s)
	I0531 19:19:01.924925    9456 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-20220531191315-2108
	W0531 19:19:04.000106    9456 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-20220531191315-2108 returned with exit code 1
	I0531 19:19:04.000172    9456 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} kubernetes-upgrade-20220531191315-2108: (2.0750014s)
	I0531 19:19:04.013441    9456 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220531191315-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 19:19:06.778528    9456 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220531191315-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 19:19:06.778528    9456 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220531191315-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (2.7650741s)
	I0531 19:19:06.788503    9456 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220531191315-2108] to gather additional debugging logs...
	I0531 19:19:06.788503    9456 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220531191315-2108
	W0531 19:19:07.886577    9456 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220531191315-2108 returned with exit code 1
	I0531 19:19:07.886796    9456 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220531191315-2108: (1.0980684s)
	I0531 19:19:07.886796    9456 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220531191315-2108]: docker network inspect kubernetes-upgrade-20220531191315-2108: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220531191315-2108
	I0531 19:19:07.886901    9456 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220531191315-2108]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220531191315-2108
	
	** /stderr **
	W0531 19:19:07.887722    9456 delete.go:139] delete failed (probably ok) <nil>
	I0531 19:19:07.887722    9456 fix.go:115] Sleeping 1 second for extra luck!
	I0531 19:19:08.897835    9456 start.go:131] createHost starting for "" (driver="docker")
	I0531 19:19:08.903552    9456 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0531 19:19:08.903552    9456 start.go:165] libmachine.API.Create for "kubernetes-upgrade-20220531191315-2108" (driver="docker")
	I0531 19:19:08.903552    9456 client.go:168] LocalClient.Create starting
	I0531 19:19:08.904239    9456 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0531 19:19:08.910348    9456 main.go:134] libmachine: Decoding PEM data...
	I0531 19:19:08.910348    9456 main.go:134] libmachine: Parsing certificate...
	I0531 19:19:08.910348    9456 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0531 19:19:08.915462    9456 main.go:134] libmachine: Decoding PEM data...
	I0531 19:19:08.915462    9456 main.go:134] libmachine: Parsing certificate...
	I0531 19:19:08.927455    9456 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220531191315-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 19:19:10.047518    9456 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220531191315-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 19:19:10.047518    9456 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220531191315-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1200571s)
	I0531 19:19:10.053518    9456 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220531191315-2108] to gather additional debugging logs...
	I0531 19:19:10.054517    9456 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220531191315-2108
	W0531 19:19:11.185640    9456 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220531191315-2108 returned with exit code 1
	I0531 19:19:11.185640    9456 cli_runner.go:217] Completed: docker network inspect kubernetes-upgrade-20220531191315-2108: (1.1311176s)
	I0531 19:19:11.185640    9456 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220531191315-2108]: docker network inspect kubernetes-upgrade-20220531191315-2108: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220531191315-2108
	I0531 19:19:11.185640    9456 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220531191315-2108]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220531191315-2108
	
	** /stderr **
	I0531 19:19:11.192638    9456 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 19:19:12.310143    9456 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1174997s)
	I0531 19:19:12.327055    9456 network.go:284] reusing subnet 192.168.49.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c864e8] amended:false}} dirty:map[] misses:0}
	I0531 19:19:12.327194    9456 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 19:19:12.327194    9456 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220531191315-2108 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0531 19:19:12.336568    9456 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220531191315-2108
	I0531 19:19:13.578077    9456 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubernetes-upgrade-20220531191315-2108: (1.2412888s)
	I0531 19:19:13.578123    9456 network_create.go:99] docker network kubernetes-upgrade-20220531191315-2108 192.168.49.0/24 created
	I0531 19:19:13.578123    9456 kic.go:106] calculated static IP "192.168.49.2" for the "kubernetes-upgrade-20220531191315-2108" container
	I0531 19:19:13.592800    9456 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 19:19:14.752001    9456 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1589502s)
	I0531 19:19:14.759980    9456 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-20220531191315-2108 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --label created_by.minikube.sigs.k8s.io=true
	I0531 19:19:15.858136    9456 cli_runner.go:217] Completed: docker volume create kubernetes-upgrade-20220531191315-2108 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --label created_by.minikube.sigs.k8s.io=true: (1.0981507s)
	I0531 19:19:15.858136    9456 oci.go:103] Successfully created a docker volume kubernetes-upgrade-20220531191315-2108
	I0531 19:19:15.867243    9456 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-20220531191315-2108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --entrypoint /usr/bin/test -v kubernetes-upgrade-20220531191315-2108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib
	I0531 19:19:22.713288    9456 cli_runner.go:217] Completed: docker run --rm --name kubernetes-upgrade-20220531191315-2108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --entrypoint /usr/bin/test -v kubernetes-upgrade-20220531191315-2108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib: (6.8460136s)
	I0531 19:19:22.713288    9456 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-20220531191315-2108
	I0531 19:19:22.713288    9456 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0531 19:19:22.713288    9456 kic.go:179] Starting extracting preloaded images to volume ...
	I0531 19:19:22.722276    9456 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20220531191315-2108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 19:19:50.344033    9456 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20220531191315-2108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir: (27.6216318s)
	I0531 19:19:50.344033    9456 kic.go:188] duration metric: took 27.630620 seconds to extract preloaded images to volume
	I0531 19:19:50.352025    9456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:19:52.671605    9456 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.3195695s)
	I0531 19:19:52.671605    9456 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:63 OomKillDisable:true NGoroutines:58 SystemTime:2022-05-31 19:19:51.4830474 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 19:19:52.679613    9456 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 19:19:54.960533    9456 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.2809096s)
	I0531 19:19:54.969529    9456 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20220531191315-2108 --name kubernetes-upgrade-20220531191315-2108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --network kubernetes-upgrade-20220531191315-2108 --ip 192.168.49.2 --volume kubernetes-upgrade-20220531191315-2108:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418
	W0531 19:19:56.361408    9456 cli_runner.go:211] docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20220531191315-2108 --name kubernetes-upgrade-20220531191315-2108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --network kubernetes-upgrade-20220531191315-2108 --ip 192.168.49.2 --volume kubernetes-upgrade-20220531191315-2108:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 returned with exit code 125
	I0531 19:19:56.361408    9456 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20220531191315-2108 --name kubernetes-upgrade-20220531191315-2108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --network kubernetes-upgrade-20220531191315-2108 --ip 192.168.49.2 --volume kubernetes-upgrade-20220531191315-2108:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418: (1.3918719s)
	I0531 19:19:56.361408    9456 client.go:171] LocalClient.Create took 47.4576389s
	I0531 19:19:58.384754    9456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:19:58.391709    9456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108
	W0531 19:19:59.540214    9456 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108 returned with exit code 1
	I0531 19:19:59.540214    9456 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108: (1.1484996s)
	I0531 19:19:59.540214    9456 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 19:19:59.784730    9456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108
	W0531 19:20:00.896129    9456 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108 returned with exit code 1
	I0531 19:20:00.896232    9456 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108: (1.1112698s)
	W0531 19:20:00.896316    9456 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0531 19:20:00.896316    9456 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 19:20:00.906623    9456 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 19:20:00.914450    9456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108
	W0531 19:20:02.093928    9456 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108 returned with exit code 1
	I0531 19:20:02.093928    9456 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108: (1.1794721s)
	I0531 19:20:02.093928    9456 retry.go:31] will retry after 141.409254ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 19:20:02.248791    9456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108
	W0531 19:20:03.436285    9456 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108 returned with exit code 1
	I0531 19:20:03.436285    9456 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108: (1.1874295s)
	W0531 19:20:03.436285    9456 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0531 19:20:03.436285    9456 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 19:20:03.436285    9456 start.go:134] duration metric: createHost completed in 54.5379948s
	I0531 19:20:03.447283    9456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:20:03.455286    9456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108
	W0531 19:20:04.652441    9456 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108 returned with exit code 1
	I0531 19:20:04.652624    9456 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108: (1.1971493s)
	I0531 19:20:04.652624    9456 retry.go:31] will retry after 149.242379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 19:20:04.822302    9456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108
	W0531 19:20:05.994211    9456 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108 returned with exit code 1
	I0531 19:20:05.994211    9456 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108: (1.1719032s)
	W0531 19:20:05.994211    9456 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0531 19:20:05.994211    9456 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 19:20:06.003204    9456 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 19:20:06.010199    9456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108
	W0531 19:20:07.174752    9456 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108 returned with exit code 1
	I0531 19:20:07.174752    9456 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108: (1.1645472s)
	I0531 19:20:07.174752    9456 retry.go:31] will retry after 253.803157ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 19:20:07.453684    9456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108
	W0531 19:20:08.664874    9456 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108 returned with exit code 1
	I0531 19:20:08.665150    9456 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220531191315-2108: (1.211185s)
	W0531 19:20:08.665333    9456 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0531 19:20:08.665425    9456 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0531 19:20:08.665425    9456 fix.go:57] fixHost completed within 5m41.6419197s
	I0531 19:20:08.665425    9456 start.go:81] releasing machines lock for "kubernetes-upgrade-20220531191315-2108", held for 5m41.6425482s
	W0531 19:20:08.666231    9456 out.go:239] * Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-20220531191315-2108" may fix it: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20220531191315-2108 --name kubernetes-upgrade-20220531191315-2108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --network kubernetes-upgrade-20220531191315-2108 --ip 192.168.49.2 --volume kubernetes-upgrade-20220531191315-2108:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418: exit status 125
	stdout:
	d208bf51202a9bdc978217cbda4726db76f458b9ccdfa37f212d2c84cec8196f
	
	stderr:
	docker: Error response from daemon: network kubernetes-upgrade-20220531191315-2108 not found.
	
	I0531 19:20:08.670749    9456 out.go:177] 
	W0531 19:20:08.673753    9456 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20220531191315-2108 --name kubernetes-upgrade-20220531191315-2108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20220531191315-2108 --network kubernetes-upgrade-20220531191315-2108 --ip 192.168.49.2 --volume kubernetes-upgrade-20220531191315-2108:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418: exit status 125
	stdout:
	d208bf51202a9bdc978217cbda4726db76f458b9ccdfa37f212d2c84cec8196f
	
	stderr:
	docker: Error response from daemon: network kubernetes-upgrade-20220531191315-2108 not found.
	
	W0531 19:20:08.673753    9456 out.go:239] * 
	W0531 19:20:08.674751    9456 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 19:20:08.678756    9456 out.go:177] 

** /stderr **
version_upgrade_test.go:231: failed to start minikube HEAD with oldest k8s version: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220531191315-2108 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker: exit status 80
version_upgrade_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220531191315-2108

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220531191315-2108: exit status 82 (4m25.7637113s)

-- stdout --
	* Stopping node "kubernetes-upgrade-20220531191315-2108"  ...
	
	

-- /stdout --
** stderr ** 
	E0531 19:20:14.900076    7728 daemonize_windows.go:38] error terminating scheduled stop for profile kubernetes-upgrade-20220531191315-2108: stopping schedule-stop service for profile kubernetes-upgrade-20220531191315-2108: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	X Exiting due to GUEST_STOP_TIMEOUT: Temporary Error: stop: Maximum number of retries (60) exceeded
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube_stop_00bf5dc3d80d6eb9ce31b0df7efaf9ad81c5ff31_15.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
version_upgrade_test.go:236: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220531191315-2108 failed: exit status 82
panic.go:482: *** TestKubernetesUpgrade FAILED at 2022-05-31 19:24:34.6198188 +0000 GMT m=+7745.660224101
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20220531191315-2108
helpers_test.go:231: (dbg) Done: docker inspect kubernetes-upgrade-20220531191315-2108: (1.1765913s)
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-20220531191315-2108:

-- stdout --
	[
	    {
	        "Id": "d208bf51202a9bdc978217cbda4726db76f458b9ccdfa37f212d2c84cec8196f",
	        "Created": "2022-05-31T19:19:56.1065998Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "created",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 128,
	            "Error": "network kubernetes-upgrade-20220531191315-2108 not found",
	            "StartedAt": "0001-01-01T00:00:00Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "",
	        "HostnamePath": "",
	        "HostsPath": "",
	        "LogPath": "",
	        "Name": "/kubernetes-upgrade-20220531191315-2108",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-20220531191315-2108:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20220531191315-2108",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/202a28681afc2711646e0524508ac497ecb25b22f752dc92dc27598590669ffa-init/diff:/var/lib/docker/overlay2/42ebd8012a176a6c9bc83a2b81ffb1eb5c8e01d5410cb5d59346522bbaddf2cc/diff:/var/lib/docker/overlay2/59dce173ea661e9679f479af711a101ab0e97afb60abfd3c5b7a199b5c3e2b3b/diff:/var/lib/docker/overlay2/0328b60a223ca9f8bab93e6b86106d8b64d16fa559a56e88abbdee372b3b6a70/diff:/var/lib/docker/overlay2/b781f2620a052ee02138337819bde18c09122be2f20b7cfefaf7688f18d0c559/diff:/var/lib/docker/overlay2/af966c145b90b1748180b9ffcb1521d6fa9914e1d0ca582b239123591ffd1527/diff:/var/lib/docker/overlay2/5cd2b511f6f3bc93855ed77b5510ca4c67426eea433ccda53ea8e864342a413e/diff:/var/lib/docker/overlay2/f896d291d0c004470c3e38ea0d3be8e2b2a48ea36d45662c40fe3e105cbf4dec/diff:/var/lib/docker/overlay2/9e8994dcf5b1692245d5e40982d040298bfa7f7977892cf4be8ba3697f2c1283/diff:/var/lib/docker/overlay2/a7da4130c1b629e2a737b34701c6d4dfe6c48f92771856a887e06a1edc5456f8/diff:/var/lib/docker/overlay2/4c25734b9c8459489256b5f70dbb446897b9510d1cf9187e903f845ffa2a7ec2/diff:/var/lib/docker/overlay2/5c6cef49a0d0d1a36777fa7e0955ecdffb41ce354b7984f232e9cd51916416f7/diff:/var/lib/docker/overlay2/b79c799ed97edb702ed4c4ccb55ef9c645ae162e30e8f297ca5dd1152c29de41/diff:/var/lib/docker/overlay2/c84b7bc7c79ffdedf2d1265e21eec011dc3215811fb0569f7eb7d6b9aec884e8/diff:/var/lib/docker/overlay2/df8e2c3af362fd04ee17cb8d67105cf489427b2ae7cec77b79a2778e6c8c0234/diff:/var/lib/docker/overlay2/e56e356f8425868b31ada978267de73f074f211985ff1849ece7ab8341c33bae/diff:/var/lib/docker/overlay2/82c032066e83d3297742c83dd29132974e9db73a0b0b0a8edd3bcbbdb29cd53c/diff:/var/lib/docker/overlay2/15532131f3e6d0b2faf705733b06ae0c869147f2ca9592e3a80b6eaadad23544/diff:/var/lib/docker/overlay2/73fa456f504732f46cbe49368167247ca47b3099a6a75a7023ba16e7f598aee5/diff:/var/lib/docker/overlay2/e5635e020aadcc8dd1e5e3cd2eaa45cb97147f47bf406211fc61d7cbfc531193/diff:/var/lib/docker/overlay2/40b76b3249d3f7a8a737e2db80ebc1ed3b76d59724641217e8aae414ad832781/diff:/var/lib/docker/overlay2/50ea2ce78d4fe52f626b2755a14f71a3c4f9b5a4f929646d9200876bdb1652c1/diff:/var/lib/docker/overlay2/d0a6e94d1f4aa73824d39c6e655bc4bdcd6568cea821b5d0f71174591c9cbbb3/diff:/var/lib/docker/overlay2/20c8fbe37a8c89a03b7bffe8cbc507e888cd5886f86f43b551d6a09fee1ce5e7/diff:/var/lib/docker/overlay2/48942b31cfe24e44c65a8be1785cd90488444f8c420a79b72a123034b01dd3f8/diff:/var/lib/docker/overlay2/c90124ab97e02facd949bfbd45815d6d73a40303b47ba4a4bc035788f5ee2dc3/diff:/var/lib/docker/overlay2/38c82aeabee1c8f46551413ecabb24f2f22680bb623f79e40c751558747a03f5/diff:/var/lib/docker/overlay2/4fa8894d1c1d773bc2e0511f273eab03fb7b8be7489eab5cd3eb57cc0d12e855/diff:/var/lib/docker/overlay2/23319fcddb47e50928e2044bac662de8153728f3a2eefa9c6ad5a5f413efec88/diff:/var/lib/docker/overlay2/b7ecd073b5b747c21ecbd1ca61887899f7e227fac3e383e24f868549b7929d74/diff:/var/lib/docker/overlay2/29a5674b4bbabfd07c4ce0b2a8b84ce98af380bf984043a4a9a6cd0743e4630c/diff:/var/lib/docker/overlay2/86a10266979ed72dc4372ade724e64741de35702626642ba60a15cca1433682e/diff:/var/lib/docker/overlay2/03a1af7f82f1cb2b6eadbd1f13c8e9f6ca281ef3a8968d6aa45d284f286aefca/diff:/var/lib/docker/overlay2/f36cce4566278d24128326f8ef6ea446884c0c6941ccdb763ddf936e178afbff/diff:/var/lib/docker/overlay2/e54a2a61ba3597af53ec65a822821ffca97788e4b1dbfeedf98bf4d12e78973d/diff:/var/lib/docker/overlay2/dd54a25b898b0d7952f0bcb99a0450ee3d6b4269599e9355b4ae5e0c540c2caa/diff:/var/lib/docker/overlay2/ae6c1d1e9e79e03382217f21886420e3118a3f18f7c44f76c19262a84a43e219/diff:/var/lib/docker/overlay2/82faa00f86c1fa99063466464f71cdd6d510aa3e45c6c43301b2119b5bd5285a/diff:/var/lib/docker/overlay2/9f54999972b485642f042b9ed4d00316be0a1d35c060e619aca79b1583180446/diff:/var/lib/docker/overlay2/b467240c20564ba44d0946c716cf18ab5be973b43b02c37ee3ddd8f94502f41b/diff:/var/lib/docker/overlay2/21217d4ff1c5cf81dd53cfd831e0961189fb9f86812e1f53843f0022383345e6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/202a28681afc2711646e0524508ac497ecb25b22f752dc92dc27598590669ffa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/202a28681afc2711646e0524508ac497ecb25b22f752dc92dc27598590669ffa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/202a28681afc2711646e0524508ac497ecb25b22f752dc92dc27598590669ffa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20220531191315-2108",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20220531191315-2108/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20220531191315-2108",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20220531191315-2108",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20220531191315-2108",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20220531191315-2108": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
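The inspect payload above reduces to a few `State` fields that explain the stop failure. A minimal sketch in Python (the JSON literal is abbreviated from the report above; this is an illustration, not part of the test suite):

```python
import json

# Abbreviated "State" block copied from the `docker inspect` output above.
state = json.loads("""
{
    "Status": "created",
    "Running": false,
    "ExitCode": 128,
    "Error": "network kubernetes-upgrade-20220531191315-2108 not found"
}
""")

# minikube's stop path needs a running container it can reach over SSH.
# A container stuck in "created" with its network missing can never get
# there, so each of the 60 stop retries fails the same way until
# GUEST_STOP_TIMEOUT is raised.
assert state["Status"] == "created" and not state["Running"]
print(state["Error"])  # network kubernetes-upgrade-20220531191315-2108 not found
```

This matches the `unable to inspect a not running container to get SSH port` error in the stderr block above.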
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-20220531191315-2108 -n kubernetes-upgrade-20220531191315-2108
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-20220531191315-2108 -n kubernetes-upgrade-20220531191315-2108: exit status 7 (3.0268043s)

-- stdout --
	Nonexistent

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-20220531191315-2108" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220531191315-2108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20220531191315-2108

=== CONT  TestKubernetesUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20220531191315-2108: (20.8939747s)
--- FAIL: TestKubernetesUpgrade (703.93s)

TestNoKubernetes/serial/Start (19.99s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220531190920-2108 --no-kubernetes --driver=docker

=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220531190920-2108 --no-kubernetes --driver=docker: exit status 1 (15.5957396s)

-- stdout --
	* [NoKubernetes-20220531190920-2108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting minikube without Kubernetes NoKubernetes-20220531190920-2108 in cluster NoKubernetes-20220531190920-2108
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=16300MB) ...

-- /stdout --
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-20220531190920-2108 --no-kubernetes --driver=docker" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20220531190920-2108
helpers_test.go:231: (dbg) Done: docker inspect NoKubernetes-20220531190920-2108: (1.2378623s)
helpers_test.go:235: (dbg) docker inspect NoKubernetes-20220531190920-2108:

-- stdout --
	[
	    {
	        "Name": "NoKubernetes-20220531190920-2108",
	        "Id": "c979c1adbea096163a6999c3c0eea729b4b8d4e6daf3406c5c572f4debc0feaa",
	        "Created": "2022-05-31T19:14:20.6451034Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.49.0/24",
	                    "Gateway": "192.168.49.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true"
	        }
	    }
	]

-- /stdout --
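Note that the object returned above is a Docker *network*, not a container: the container create step failed, so inspecting by name matched the leftover network of the same name. A small sketch showing how to tell the two apart (JSON abbreviated from the report above, purely illustrative):

```python
import json

# Abbreviated object from `docker inspect NoKubernetes-20220531190920-2108`
# above. A container record would carry a "State" block; this one instead
# has a bridge driver and an empty "Containers" map, identifying it as
# network residue from the failed start.
obj = json.loads("""
{
    "Name": "NoKubernetes-20220531190920-2108",
    "Scope": "local",
    "Driver": "bridge",
    "Containers": {}
}
""")

assert "State" not in obj
assert obj["Driver"] == "bridge" and obj["Containers"] == {}
```

This is consistent with the later status error, `No such container: NoKubernetes-20220531190920-2108`.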
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220531190920-2108 -n NoKubernetes-20220531190920-2108
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220531190920-2108 -n NoKubernetes-20220531190920-2108: exit status 7 (3.144847s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0531 19:14:24.821790    1884 status.go:247] status error: host: state: unknown state "NoKubernetes-20220531190920-2108": docker container inspect NoKubernetes-20220531190920-2108 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-20220531190920-2108

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-20220531190920-2108" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/Start (19.99s)

TestPause/serial/VerifyDeletedResources (22.05s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (9.7031605s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:168: (dbg) Done: docker ps -a: (1.1931133s)
pause_test.go:173: (dbg) Run:  docker volume inspect pause-20220531191437-2108
pause_test.go:173: (dbg) Done: docker volume inspect pause-20220531191437-2108: (2.7419445s)
pause_test.go:175: expected to see error and volume "docker volume inspect pause-20220531191437-2108" to not exist after deletion but got no error and this output: 
-- stdout --
	[
	    {
	        "CreatedAt": "2022-05-31T19:15:29Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "pause-20220531191437-2108"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/pause-20220531191437-2108/_data",
	        "Name": "pause-20220531191437-2108",
	        "Options": {},
	        "Scope": "local"
	    }
	]

-- /stdout --
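The assertion at pause_test.go:175 treats a *successful* `docker volume inspect` after deletion as a leak. A sketch of that check against the record above (JSON abbreviated from the report; illustrative only, not the test's actual code):

```python
import json

# Abbreviated volume record copied from the `docker volume inspect`
# output above. After `minikube delete` this inspect call should have
# failed with a not-found error; a surviving record that still carries
# minikube's ownership labels is evidence the volume was leaked.
vol = json.loads("""
{
    "Name": "pause-20220531191437-2108",
    "Driver": "local",
    "Labels": {
        "created_by.minikube.sigs.k8s.io": "true",
        "name.minikube.sigs.k8s.io": "pause-20220531191437-2108"
    }
}
""")

assert vol["Labels"]["created_by.minikube.sigs.k8s.io"] == "true"
print("leaked volume:", vol["Name"])  # leaked volume: pause-20220531191437-2108
```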
pause_test.go:178: (dbg) Run:  docker network ls
pause_test.go:178: (dbg) Done: docker network ls: (3.2328609s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/VerifyDeletedResources]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20220531191437-2108
helpers_test.go:235: (dbg) docker inspect pause-20220531191437-2108:

-- stdout --
	[
	    {
	        "CreatedAt": "2022-05-31T19:15:29Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "pause-20220531191437-2108"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/pause-20220531191437-2108/_data",
	        "Name": "pause-20220531191437-2108",
	        "Options": {},
	        "Scope": "local"
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20220531191437-2108 -n pause-20220531191437-2108
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20220531191437-2108 -n pause-20220531191437-2108: exit status 85 (517.9535ms)

-- stdout --
	* Profile "pause-20220531191437-2108" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20220531191437-2108"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20220531191437-2108" host is not running, skipping log retrieval (state="* Profile \"pause-20220531191437-2108\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20220531191437-2108\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/VerifyDeletedResources]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20220531191437-2108
helpers_test.go:231: (dbg) Done: docker inspect pause-20220531191437-2108: (3.9484534s)
helpers_test.go:235: (dbg) docker inspect pause-20220531191437-2108:

-- stdout --
	[
	    {
	        "CreatedAt": "2022-05-31T19:15:29Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "pause-20220531191437-2108"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/pause-20220531191437-2108/_data",
	        "Name": "pause-20220531191437-2108",
	        "Options": {},
	        "Scope": "local"
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20220531191437-2108 -n pause-20220531191437-2108
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20220531191437-2108 -n pause-20220531191437-2108: exit status 85 (385.0044ms)

-- stdout --
	* Profile "pause-20220531191437-2108" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p pause-20220531191437-2108"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "pause-20220531191437-2108" host is not running, skipping log retrieval (state="* Profile \"pause-20220531191437-2108\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p pause-20220531191437-2108\"")
--- FAIL: TestPause/serial/VerifyDeletedResources (22.05s)

TestStartStop/group/old-k8s-version/serial/Pause (69.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-20220531192531-2108 --alsologtostderr -v=1

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p old-k8s-version-20220531192531-2108 --alsologtostderr -v=1: exit status 80 (9.2355283s)

-- stdout --
	* Pausing node old-k8s-version-20220531192531-2108 ... 
	
	

-- /stdout --
** stderr ** 
	I0531 19:43:50.796534    9208 out.go:296] Setting OutFile to fd 1816 ...
	I0531 19:43:50.866522    9208 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:43:50.866522    9208 out.go:309] Setting ErrFile to fd 1840...
	I0531 19:43:50.867526    9208 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:43:50.879525    9208 out.go:303] Setting JSON to false
	I0531 19:43:50.879525    9208 mustload.go:65] Loading cluster: old-k8s-version-20220531192531-2108
	I0531 19:43:50.880530    9208 config.go:178] Loaded profile config "old-k8s-version-20220531192531-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0531 19:43:50.897510    9208 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220531192531-2108 --format={{.State.Status}}
	I0531 19:43:54.319795    9208 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220531192531-2108 --format={{.State.Status}}: (3.4221697s)
	I0531 19:43:54.320024    9208 host.go:66] Checking if "old-k8s-version-20220531192531-2108" exists ...
	I0531 19:43:54.344735    9208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220531192531-2108
	I0531 19:43:55.819724    9208 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220531192531-2108: (1.4748795s)
	I0531 19:43:55.824646    9208 pause.go:58] "namespaces" ="keys" ="(MISSING)"
	I0531 19:43:55.827940    9208 out.go:177] * Pausing node old-k8s-version-20220531192531-2108 ... 
	I0531 19:43:55.835033    9208 host.go:66] Checking if "old-k8s-version-20220531192531-2108" exists ...
	I0531 19:43:55.858651    9208 ssh_runner.go:195] Run: systemctl --version
	I0531 19:43:55.869643    9208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531192531-2108
	I0531 19:43:57.379567    9208 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220531192531-2108: (1.5099176s)
	I0531 19:43:57.379693    9208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54198 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\old-k8s-version-20220531192531-2108\id_rsa Username:docker}
	I0531 19:43:57.508180    9208 ssh_runner.go:235] Completed: systemctl --version: (1.6495219s)
	I0531 19:43:57.519167    9208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:43:57.561836    9208 pause.go:50] kubelet running: true
	I0531 19:43:57.578853    9208 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0531 19:43:57.921839    9208 retry.go:31] will retry after 276.165072ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0531 19:43:58.212884    9208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:43:58.256050    9208 pause.go:50] kubelet running: true
	I0531 19:43:58.269206    9208 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0531 19:43:58.609400    9208 retry.go:31] will retry after 540.190908ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0531 19:43:59.170670    9208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:43:59.202429    9208 pause.go:50] kubelet running: true
	I0531 19:43:59.215078    9208 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0531 19:43:59.567810    9208 out.go:177] 
	W0531 19:43:59.570826    9208 out.go:239] X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	W0531 19:43:59.570826    9208 out.go:239] * 
	W0531 19:43:59.700343    9208 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube_pause_8a34b101973a5475dd3f2895f630b939c2202307_5.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 19:43:59.705995    9208 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:313: out/minikube-windows-amd64.exe pause -p old-k8s-version-20220531192531-2108 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220531192531-2108

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:231: (dbg) Done: docker inspect old-k8s-version-20220531192531-2108: (1.3714537s)
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220531192531-2108:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "73d9dd1c979f96bb5332755b3731cd7b8a50346468819b2bc362eb6e60c2bebe",
	        "Created": "2022-05-31T19:32:10.7783868Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 209389,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T19:35:54.7458106Z",
	            "FinishedAt": "2022-05-31T19:35:32.9751099Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/73d9dd1c979f96bb5332755b3731cd7b8a50346468819b2bc362eb6e60c2bebe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/73d9dd1c979f96bb5332755b3731cd7b8a50346468819b2bc362eb6e60c2bebe/hostname",
	        "HostsPath": "/var/lib/docker/containers/73d9dd1c979f96bb5332755b3731cd7b8a50346468819b2bc362eb6e60c2bebe/hosts",
	        "LogPath": "/var/lib/docker/containers/73d9dd1c979f96bb5332755b3731cd7b8a50346468819b2bc362eb6e60c2bebe/73d9dd1c979f96bb5332755b3731cd7b8a50346468819b2bc362eb6e60c2bebe-json.log",
	        "Name": "/old-k8s-version-20220531192531-2108",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220531192531-2108:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220531192531-2108",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/62b6f32bd3ed97def9fa39268e79cead11ff11a89372be02429bef84bcf65810-init/diff:/var/lib/docker/overlay2/42ebd8012a176a6c9bc83a2b81ffb1eb5c8e01d5410cb5d59346522bbaddf2cc/diff:/var/lib/docker/overlay2/59dce173ea661e9679f479af711a101ab0e97afb60abfd3c5b7a199b5c3e2b3b/diff:/var/lib/docker/overlay2/0328b60a223ca9f8bab93e6b86106d8b64d16fa559a56e88abbdee372b3b6a70/diff:/var/lib/docker/overlay2/b781f2620a052ee02138337819bde18c09122be2f20b7cfefaf7688f18d0c559/diff:/var/lib/docker/overlay2/af966c145b90b1748180b9ffcb1521d6fa9914e1d0ca582b239123591ffd1527/diff:/var/lib/docker/overlay2/5cd2b511f6f3bc93855ed77b5510ca4c67426eea433ccda53ea8e864342a413e/diff:/var/lib/docker/overlay2/f896d291d0c004470c3e38ea0d3be8e2b2a48ea36d45662c40fe3e105cbf4dec/diff:/var/lib/docker/overlay2/9e8994dcf5b1692245d5e40982d040298bfa7f7977892cf4be8ba3697f2c1283/diff:/var/lib/docker/overlay2/a7da4130c1b629e2a737b34701c6d4dfe6c48f92771856a887e06a1edc5456f8/diff:/var/lib/docker/overlay2/4c25734b9c8459489256b5f70dbb446897b9510d1cf9187e903f845ffa2a7ec2/diff:/var/lib/docker/overlay2/5c6cef49a0d0d1a36777fa7e0955ecdffb41ce354b7984f232e9cd51916416f7/diff:/var/lib/docker/overlay2/b79c799ed97edb702ed4c4ccb55ef9c645ae162e30e8f297ca5dd1152c29de41/diff:/var/lib/docker/overlay2/c84b7bc7c79ffdedf2d1265e21eec011dc3215811fb0569f7eb7d6b9aec884e8/diff:/var/lib/docker/overlay2/df8e2c3af362fd04ee17cb8d67105cf489427b2ae7cec77b79a2778e6c8c0234/diff:/var/lib/docker/overlay2/e56e356f8425868b31ada978267de73f074f211985ff1849ece7ab8341c33bae/diff:/var/lib/docker/overlay2/82c032066e83d3297742c83dd29132974e9db73a0b0b0a8edd3bcbbdb29cd53c/diff:/var/lib/docker/overlay2/15532131f3e6d0b2faf705733b06ae0c869147f2ca9592e3a80b6eaadad23544/diff:/var/lib/docker/overlay2/73fa456f504732f46cbe49368167247ca47b3099a6a75a7023ba16e7f598aee5/diff:/var/lib/docker/overlay2/e5635e020aadcc8dd1e5e3cd2eaa45cb97147f47bf406211fc61d7cbfc531193/diff:/var/lib/docker/overlay2/40b76b3249d3f7a8a737e2db80ebc1ed3b76d59724641217e8aae414ad832781/diff:/var/lib/docker/overlay2/50ea2ce78d4fe52f626b2755a14f71a3c4f9b5a4f929646d9200876bdb1652c1/diff:/var/lib/docker/overlay2/d0a6e94d1f4aa73824d39c6e655bc4bdcd6568cea821b5d0f71174591c9cbbb3/diff:/var/lib/docker/overlay2/20c8fbe37a8c89a03b7bffe8cbc507e888cd5886f86f43b551d6a09fee1ce5e7/diff:/var/lib/docker/overlay2/48942b31cfe24e44c65a8be1785cd90488444f8c420a79b72a123034b01dd3f8/diff:/var/lib/docker/overlay2/c90124ab97e02facd949bfbd45815d6d73a40303b47ba4a4bc035788f5ee2dc3/diff:/var/lib/docker/overlay2/38c82aeabee1c8f46551413ecabb24f2f22680bb623f79e40c751558747a03f5/diff:/var/lib/docker/overlay2/4fa8894d1c1d773bc2e0511f273eab03fb7b8be7489eab5cd3eb57cc0d12e855/diff:/var/lib/docker/overlay2/23319fcddb47e50928e2044bac662de8153728f3a2eefa9c6ad5a5f413efec88/diff:/var/lib/docker/overlay2/b7ecd073b5b747c21ecbd1ca61887899f7e227fac3e383e24f868549b7929d74/diff:/var/lib/docker/overlay2/29a5674b4bbabfd07c4ce0b2a8b84ce98af380bf984043a4a9a6cd0743e4630c/diff:/var/lib/docker/overlay2/86a10266979ed72dc4372ade724e64741de35702626642ba60a15cca1433682e/diff:/var/lib/docker/overlay2/03a1af7f82f1cb2b6eadbd1f13c8e9f6ca281ef3a8968d6aa45d284f286aefca/diff:/var/lib/docker/overlay2/f36cce4566278d24128326f8ef6ea446884c0c6941ccdb763ddf936e178afbff/diff:/var/lib/docker/overlay2/e54a2a61ba3597af53ec65a822821ffca97788e4b1dbfeedf98bf4d12e78973d/diff:/var/lib/docker/overlay2/dd54a25b898b0d7952f0bcb99a0450ee3d6b4269599e9355b4ae5e0c540c2caa/diff:/var/lib/docker/overlay2/ae6c1d1e9e79e03382217f21886420e3118a3f18f7c44f76c19262a84a43e219/diff:/var/lib/docker/overlay2/82faa00f86c1fa99063466464f71cdd6d510aa3e45c6c43301b2119b5bd5285a/diff:/var/lib/docker/overlay2/9f54999972b485642f042b9ed4d00316be0a1d35c060e619aca79b1583180446/diff:/var/lib/docker/overlay2/b467240c20564ba44d0946c716cf18ab5be973b43b02c37ee3ddd8f94502f41b/diff:/var/lib/docker/overlay2/21217d4ff1c5cf81dd53cfd831e0961189fb9f86812e1f53843f0022383345e6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/62b6f32bd3ed97def9fa39268e79cead11ff11a89372be02429bef84bcf65810/merged",
	                "UpperDir": "/var/lib/docker/overlay2/62b6f32bd3ed97def9fa39268e79cead11ff11a89372be02429bef84bcf65810/diff",
	                "WorkDir": "/var/lib/docker/overlay2/62b6f32bd3ed97def9fa39268e79cead11ff11a89372be02429bef84bcf65810/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220531192531-2108",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220531192531-2108/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220531192531-2108",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220531192531-2108",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220531192531-2108",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9d19ad06dc07a20673d48b61192d7c0e0905e29621856aeb09b8f9b0410f62d1",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54198"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54199"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54200"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54201"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54202"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9d19ad06dc07",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220531192531-2108": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "73d9dd1c979f",
	                        "old-k8s-version-20220531192531-2108"
	                    ],
	                    "NetworkID": "ee4a2a412a92078b653cbcaf1d57d8604149789ff8c7d75dfb2ed03e6ea10fc2",
	                    "EndpointID": "4270c98919ae2493bf9ef2d067f5322982869ac9e0583f0317a4898c720e0680",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
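The `docker container inspect -f` commands near the top of this log use a Go template, `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, to pull a single host port out of the `Ports` map shown in the inspect output above (e.g. the `22/tcp` binding to `54198` that feeds the ssh client). A small sketch of how that template evaluates, using hypothetical minimal structs rather than Docker's real types:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Minimal stand-ins for the relevant slice of Docker's inspect JSON.
type portBinding struct {
	HostIp   string
	HostPort string
}

type networkSettings struct {
	Ports map[string][]portBinding
}

type container struct {
	NetworkSettings networkSettings
}

// hostPort renders the same template shape minikube passes via
// `docker container inspect -f` against an in-memory container:
// index into the Ports map, take binding 0, read its HostPort.
func hostPort(c container, port string) (string, error) {
	tmpl, err := template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "` + port + `") 0).HostPort}}`)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, c); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	c := container{NetworkSettings: networkSettings{
		Ports: map[string][]portBinding{
			"22/tcp": {{HostIp: "127.0.0.1", HostPort: "54198"}},
		},
	}}
	s, _ := hostPort(c, "22/tcp")
	fmt.Println(s) // prints 54198, matching the ssh client line in the log
}
```

The template fails (and minikube's cli_runner reports an error) if the port key is absent or has no bindings, since `index ... 0` then has nothing to dereference.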
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220531192531-2108 -n old-k8s-version-20220531192531-2108

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220531192531-2108 -n old-k8s-version-20220531192531-2108: (8.3818537s)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-20220531192531-2108 logs -n 25
E0531 19:44:12.281863    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220531192611-2108\client.crt: The system cannot find the path specified.
E0531 19:44:13.058811    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p old-k8s-version-20220531192531-2108 logs -n 25: (10.5571464s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |       User        |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|-------------------|----------------|---------------------|---------------------|
	| addons  | enable dashboard -p                                        | old-k8s-version-20220531192531-2108            | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:35 GMT | 31 May 22 19:35 GMT |
	|         | old-k8s-version-20220531192531-2108                        |                                                |                   |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |                   |                |                     |                     |
	| start   | -p                                                         | no-preload-20220531192611-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:29 GMT | 31 May 22 19:36 GMT |
	|         | no-preload-20220531192611-2108                             |                                                |                   |                |                     |                     |
	|         | --memory=2200                                              |                                                |                   |                |                     |                     |
	|         | --alsologtostderr                                          |                                                |                   |                |                     |                     |
	|         | --wait=true --preload=false                                |                                                |                   |                |                     |                     |
	|         | --driver=docker                                            |                                                |                   |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |                   |                |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220531193451-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:34 GMT | 31 May 22 19:37 GMT |
	|         | default-k8s-different-port-20220531193451-2108             |                                                |                   |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |                   |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |                   |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |                   |                |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220531193451-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:37 GMT | 31 May 22 19:37 GMT |
	|         | default-k8s-different-port-20220531193451-2108             |                                                |                   |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |                   |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |                   |                |                     |                     |
	| ssh     | -p                                                         | no-preload-20220531192611-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:37 GMT | 31 May 22 19:37 GMT |
	|         | no-preload-20220531192611-2108                             |                                                |                   |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |                   |                |                     |                     |
	| pause   | -p                                                         | no-preload-20220531192611-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:37 GMT | 31 May 22 19:37 GMT |
	|         | no-preload-20220531192611-2108                             |                                                |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |                   |                |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220531193451-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:37 GMT | 31 May 22 19:37 GMT |
	|         | default-k8s-different-port-20220531193451-2108             |                                                |                   |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |                   |                |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220531193451-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:37 GMT | 31 May 22 19:37 GMT |
	|         | default-k8s-different-port-20220531193451-2108             |                                                |                   |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |                   |                |                     |                     |
	| unpause | -p                                                         | no-preload-20220531192611-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:37 GMT | 31 May 22 19:37 GMT |
	|         | no-preload-20220531192611-2108                             |                                                |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |                   |                |                     |                     |
	| delete  | -p                                                         | no-preload-20220531192611-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:38 GMT | 31 May 22 19:38 GMT |
	|         | no-preload-20220531192611-2108                             |                                                |                   |                |                     |                     |
	| delete  | -p                                                         | no-preload-20220531192611-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:38 GMT | 31 May 22 19:38 GMT |
	|         | no-preload-20220531192611-2108                             |                                                |                   |                |                     |                     |
	| start   | -p newest-cni-20220531193849-2108 --memory=2200            | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:38 GMT | 31 May 22 19:41 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |                   |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |                   |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |                   |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |                   |                |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.23.6               |                                                |                   |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:41 GMT | 31 May 22 19:41 GMT |
	|         | newest-cni-20220531193849-2108                             |                                                |                   |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |                   |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |                   |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:41 GMT | 31 May 22 19:41 GMT |
	|         | newest-cni-20220531193849-2108                             |                                                |                   |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |                   |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:41 GMT | 31 May 22 19:41 GMT |
	|         | newest-cni-20220531193849-2108                             |                                                |                   |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |                   |                |                     |                     |
	| start   | -p                                                         | embed-certs-20220531193346-2108                | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:33 GMT | 31 May 22 19:42 GMT |
	|         | embed-certs-20220531193346-2108                            |                                                |                   |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                |                   |                |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                |                   |                |                     |                     |
	|         | --driver=docker                                            |                                                |                   |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |                   |                |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220531193346-2108                | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:42 GMT | 31 May 22 19:42 GMT |
	|         | embed-certs-20220531193346-2108                            |                                                |                   |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |                   |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |                   |                |                     |                     |
	| stop    | -p                                                         | embed-certs-20220531193346-2108                | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:42 GMT | 31 May 22 19:42 GMT |
	|         | embed-certs-20220531193346-2108                            |                                                |                   |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |                   |                |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220531193346-2108                | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:42 GMT | 31 May 22 19:42 GMT |
	|         | embed-certs-20220531193346-2108                            |                                                |                   |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |                   |                |                     |                     |
	| start   | -p newest-cni-20220531193849-2108 --memory=2200            | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:41 GMT | 31 May 22 19:43 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |                   |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |                   |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |                   |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |                   |                |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.23.6               |                                                |                   |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:43 GMT | 31 May 22 19:43 GMT |
	|         | newest-cni-20220531193849-2108                             |                                                |                   |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |                   |                |                     |                     |
	| start   | -p                                                         | old-k8s-version-20220531192531-2108            | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:35 GMT | 31 May 22 19:43 GMT |
	|         | old-k8s-version-20220531192531-2108                        |                                                |                   |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                |                   |                |                     |                     |
	|         | --wait=true --kvm-network=default                          |                                                |                   |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                |                   |                |                     |                     |
	|         | --disable-driver-mounts                                    |                                                |                   |                |                     |                     |
	|         | --keep-context=false                                       |                                                |                   |                |                     |                     |
	|         | --driver=docker                                            |                                                |                   |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                                                |                   |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:43 GMT | 31 May 22 19:43 GMT |
	|         | newest-cni-20220531193849-2108                             |                                                |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |                   |                |                     |                     |
	| ssh     | -p                                                         | old-k8s-version-20220531192531-2108            | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:43 GMT | 31 May 22 19:43 GMT |
	|         | old-k8s-version-20220531192531-2108                        |                                                |                   |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |                   |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:43 GMT | 31 May 22 19:43 GMT |
	|         | newest-cni-20220531193849-2108                             |                                                |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |                   |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|-------------------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 19:42:50
	Running on machine: minikube7
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 19:42:50.127342    8616 out.go:296] Setting OutFile to fd 2016 ...
	I0531 19:42:50.216255    8616 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:42:50.216255    8616 out.go:309] Setting ErrFile to fd 712...
	I0531 19:42:50.216255    8616 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:42:50.238305    8616 out.go:303] Setting JSON to false
	I0531 19:42:50.243280    8616 start.go:115] hostinfo: {"hostname":"minikube7","uptime":84440,"bootTime":1653941730,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0531 19:42:50.243280    8616 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 19:42:50.247300    8616 out.go:177] * [embed-certs-20220531193346-2108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0531 19:42:50.251277    8616 notify.go:193] Checking for updates...
	I0531 19:42:50.254277    8616 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0531 19:42:50.257263    8616 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0531 19:42:50.259281    8616 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 19:42:50.262281    8616 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:42:46.136944    7556 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54458/healthz ...
	I0531 19:42:46.155664    7556 api_server.go:266] https://127.0.0.1:54458/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:42:46.155664    7556 api_server.go:102] status: https://127.0.0.1:54458/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:42:46.638060    7556 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54458/healthz ...
	I0531 19:42:46.725795    7556 api_server.go:266] https://127.0.0.1:54458/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:42:46.725795    7556 api_server.go:102] status: https://127.0.0.1:54458/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:42:47.143046    7556 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54458/healthz ...
	I0531 19:42:47.242770    7556 api_server.go:266] https://127.0.0.1:54458/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:42:47.242770    7556 api_server.go:102] status: https://127.0.0.1:54458/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:42:47.638251    7556 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54458/healthz ...
	I0531 19:42:47.830869    7556 api_server.go:266] https://127.0.0.1:54458/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:42:47.831617    7556 api_server.go:102] status: https://127.0.0.1:54458/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:42:48.140734    7556 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54458/healthz ...
	I0531 19:42:48.245672    7556 api_server.go:266] https://127.0.0.1:54458/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:42:48.245672    7556 api_server.go:102] status: https://127.0.0.1:54458/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:42:48.641307    7556 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54458/healthz ...
	I0531 19:42:48.733322    7556 api_server.go:266] https://127.0.0.1:54458/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:42:48.733322    7556 api_server.go:102] status: https://127.0.0.1:54458/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:42:49.147453    7556 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54458/healthz ...
	I0531 19:42:49.240676    7556 api_server.go:266] https://127.0.0.1:54458/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:42:49.240676    7556 api_server.go:102] status: https://127.0.0.1:54458/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:42:49.648266    7556 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54458/healthz ...
	I0531 19:42:49.743281    7556 api_server.go:266] https://127.0.0.1:54458/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:42:49.743281    7556 api_server.go:102] status: https://127.0.0.1:54458/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:42:50.146279    7556 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54458/healthz ...
	I0531 19:42:50.240267    7556 api_server.go:266] https://127.0.0.1:54458/healthz returned 200:
	ok
	I0531 19:42:50.325883    7556 api_server.go:140] control plane version: v1.23.6
	I0531 19:42:50.325883    7556 api_server.go:130] duration metric: took 12.194093s to wait for apiserver health ...
	I0531 19:42:50.325883    7556 cni.go:95] Creating CNI manager for ""
	I0531 19:42:50.325883    7556 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 19:42:50.325883    7556 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 19:42:50.433899    7556 system_pods.go:59] 8 kube-system pods found
	I0531 19:42:50.433899    7556 system_pods.go:61] "coredns-64897985d-qpzx5" [801bd65d-655d-451b-a3ff-79295aaeaf09] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0531 19:42:50.433899    7556 system_pods.go:61] "etcd-newest-cni-20220531193849-2108" [663ce3bf-8420-4d38-a012-54abf228ce78] Running
	I0531 19:42:50.433899    7556 system_pods.go:61] "kube-apiserver-newest-cni-20220531193849-2108" [e713d4e4-3356-41ac-b8af-52bf19d65052] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0531 19:42:50.433899    7556 system_pods.go:61] "kube-controller-manager-newest-cni-20220531193849-2108" [fd22af45-85a2-484a-a239-0e45831e8df8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 19:42:50.433899    7556 system_pods.go:61] "kube-proxy-mh9ct" [85823877-90a8-4ad0-b0ad-b4ed75f4845e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 19:42:50.433899    7556 system_pods.go:61] "kube-scheduler-newest-cni-20220531193849-2108" [34deeb35-60bc-42b6-a91d-cedda9a76363] Running
	I0531 19:42:50.433899    7556 system_pods.go:61] "metrics-server-b955d9d8-rt44k" [d0c2b733-0fd1-4e34-88aa-d73b77704a89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 19:42:50.433899    7556 system_pods.go:61] "storage-provisioner" [a104bd0d-cf9b-4dd3-b127-d3e84c4ae96b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0531 19:42:50.433899    7556 system_pods.go:74] duration metric: took 108.0157ms to wait for pod list to return data ...
	I0531 19:42:50.433899    7556 node_conditions.go:102] verifying NodePressure condition ...
	I0531 19:42:50.538880    7556 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0531 19:42:50.538880    7556 node_conditions.go:123] node cpu capacity is 16
	I0531 19:42:50.538880    7556 node_conditions.go:105] duration metric: took 104.9809ms to run NodePressure ...
	I0531 19:42:50.538880    7556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:42:49.037358    1340 system_pods.go:86] 4 kube-system pods found
	I0531 19:42:49.037358    1340 system_pods.go:89] "coredns-5644d7b6d9-jxp72" [f831437f-5aa7-42cd-bb72-b1a20942f56a] Running
	I0531 19:42:49.037358    1340 system_pods.go:89] "kube-proxy-r556l" [86e67749-a177-45b6-8857-e4f6c55b8dd8] Running
	I0531 19:42:49.037358    1340 system_pods.go:89] "metrics-server-6f89b5864b-v8mjx" [c248957e-7215-4044-9e73-acc8998b61f1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 19:42:49.037358    1340 system_pods.go:89] "storage-provisioner" [8001c475-f873-4298-ad3e-ac9b8d56927b] Running
	I0531 19:42:49.037358    1340 retry.go:31] will retry after 6.402197611s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0531 19:42:52.545436    7556 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.0065467s)
	I0531 19:42:52.545436    7556 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 19:42:52.637427    7556 ops.go:34] apiserver oom_adj: -16
	I0531 19:42:52.637427    7556 kubeadm.go:630] restartCluster took 27.8311331s
	I0531 19:42:52.637427    7556 kubeadm.go:397] StartCluster complete in 27.9633793s
	I0531 19:42:52.637427    7556 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:42:52.638421    7556 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0531 19:42:52.642395    7556 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:42:52.750403    7556 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220531193849-2108" rescaled to 1
	I0531 19:42:52.750403    7556 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 19:42:52.750403    7556 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0531 19:42:52.756465    7556 out.go:177] * Verifying Kubernetes components...
	I0531 19:42:52.750403    7556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 19:42:52.750403    7556 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220531193849-2108"
	I0531 19:42:52.750403    7556 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220531193849-2108"
	I0531 19:42:52.750403    7556 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220531193849-2108"
	I0531 19:42:52.750403    7556 addons.go:65] Setting dashboard=true in profile "newest-cni-20220531193849-2108"
	I0531 19:42:52.751430    7556 config.go:178] Loaded profile config "newest-cni-20220531193849-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:42:51.472496    9560 pod_ready.go:102] pod "metrics-server-b955d9d8-9rlw5" in "kube-system" namespace has status "Ready":"False"
	I0531 19:42:53.953088    9560 pod_ready.go:81] duration metric: took 4m0.1785878s waiting for pod "metrics-server-b955d9d8-9rlw5" in "kube-system" namespace to be "Ready" ...
	E0531 19:42:53.953088    9560 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-9rlw5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 19:42:53.953088    9560 pod_ready.go:38] duration metric: took 4m5.5039717s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:42:53.953088    9560 kubeadm.go:630] restartCluster took 4m28.9618863s
	W0531 19:42:53.953731    9560 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0531 19:42:53.953731    9560 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0531 19:42:52.756465    7556 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220531193849-2108"
	W0531 19:42:52.759410    7556 addons.go:165] addon storage-provisioner should already be in state true
	I0531 19:42:52.756465    7556 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220531193849-2108"
	I0531 19:42:52.760420    7556 host.go:66] Checking if "newest-cni-20220531193849-2108" exists ...
	I0531 19:42:52.756465    7556 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220531193849-2108"
	W0531 19:42:52.760420    7556 addons.go:165] addon metrics-server should already be in state true
	I0531 19:42:52.756465    7556 addons.go:153] Setting addon dashboard=true in "newest-cni-20220531193849-2108"
	W0531 19:42:52.760420    7556 addons.go:165] addon dashboard should already be in state true
	I0531 19:42:52.760420    7556 host.go:66] Checking if "newest-cni-20220531193849-2108" exists ...
	I0531 19:42:52.760420    7556 host.go:66] Checking if "newest-cni-20220531193849-2108" exists ...
	I0531 19:42:52.787289    7556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:42:52.798232    7556 cli_runner.go:164] Run: docker container inspect newest-cni-20220531193849-2108 --format={{.State.Status}}
	I0531 19:42:52.802209    7556 cli_runner.go:164] Run: docker container inspect newest-cni-20220531193849-2108 --format={{.State.Status}}
	I0531 19:42:52.803210    7556 cli_runner.go:164] Run: docker container inspect newest-cni-20220531193849-2108 --format={{.State.Status}}
	I0531 19:42:52.807213    7556 cli_runner.go:164] Run: docker container inspect newest-cni-20220531193849-2108 --format={{.State.Status}}
	I0531 19:42:53.346951    7556 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0531 19:42:53.362835    7556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220531193849-2108
	I0531 19:42:54.408200    7556 cli_runner.go:217] Completed: docker container inspect newest-cni-20220531193849-2108 --format={{.State.Status}}: (1.6099613s)
	I0531 19:42:54.435301    7556 cli_runner.go:217] Completed: docker container inspect newest-cni-20220531193849-2108 --format={{.State.Status}}: (1.6330855s)
	I0531 19:42:54.439314    7556 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 19:42:54.442291    7556 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 19:42:54.442291    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 19:42:54.446291    7556 cli_runner.go:217] Completed: docker container inspect newest-cni-20220531193849-2108 --format={{.State.Status}}: (1.6430744s)
	I0531 19:42:54.451275    7556 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 19:42:54.454298    7556 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 19:42:54.456303    7556 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 19:42:54.456303    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 19:42:54.456303    7556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531193849-2108
	I0531 19:42:54.461309    7556 cli_runner.go:217] Completed: docker container inspect newest-cni-20220531193849-2108 --format={{.State.Status}}: (1.6540886s)
	I0531 19:42:54.464270    7556 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 19:42:50.265252    8616 config.go:178] Loaded profile config "embed-certs-20220531193346-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:42:50.267300    8616 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 19:42:54.110723    8616 docker.go:137] docker version: linux-20.10.14
	I0531 19:42:54.118695    8616 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:42:54.465305    7556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531193849-2108
	I0531 19:42:54.466270    7556 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 19:42:54.466270    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 19:42:54.477291    7556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531193849-2108
	I0531 19:42:54.530792    7556 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220531193849-2108"
	W0531 19:42:54.530929    7556 addons.go:165] addon default-storageclass should already be in state true
	I0531 19:42:54.530929    7556 host.go:66] Checking if "newest-cni-20220531193849-2108" exists ...
	I0531 19:42:54.565279    7556 cli_runner.go:164] Run: docker container inspect newest-cni-20220531193849-2108 --format={{.State.Status}}
	I0531 19:42:55.004826    7556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220531193849-2108: (1.6419837s)
	I0531 19:42:55.004826    7556 api_server.go:51] waiting for apiserver process to appear ...
	I0531 19:42:55.017835    7556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:42:55.060968    7556 api_server.go:71] duration metric: took 2.3105545s to wait for apiserver process to appear ...
	I0531 19:42:55.060968    7556 api_server.go:87] waiting for apiserver healthz status ...
	I0531 19:42:55.060968    7556 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54458/healthz ...
	I0531 19:42:55.159964    7556 api_server.go:266] https://127.0.0.1:54458/healthz returned 200:
	ok
	I0531 19:42:55.165977    7556 api_server.go:140] control plane version: v1.23.6
	I0531 19:42:55.165977    7556 api_server.go:130] duration metric: took 105.0091ms to wait for apiserver health ...
	I0531 19:42:55.165977    7556 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 19:42:55.244153    7556 system_pods.go:59] 8 kube-system pods found
	I0531 19:42:55.244153    7556 system_pods.go:61] "coredns-64897985d-qpzx5" [801bd65d-655d-451b-a3ff-79295aaeaf09] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0531 19:42:55.244153    7556 system_pods.go:61] "etcd-newest-cni-20220531193849-2108" [663ce3bf-8420-4d38-a012-54abf228ce78] Running
	I0531 19:42:55.244153    7556 system_pods.go:61] "kube-apiserver-newest-cni-20220531193849-2108" [e713d4e4-3356-41ac-b8af-52bf19d65052] Running
	I0531 19:42:55.244153    7556 system_pods.go:61] "kube-controller-manager-newest-cni-20220531193849-2108" [fd22af45-85a2-484a-a239-0e45831e8df8] Running
	I0531 19:42:55.244153    7556 system_pods.go:61] "kube-proxy-mh9ct" [85823877-90a8-4ad0-b0ad-b4ed75f4845e] Running
	I0531 19:42:55.244153    7556 system_pods.go:61] "kube-scheduler-newest-cni-20220531193849-2108" [34deeb35-60bc-42b6-a91d-cedda9a76363] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 19:42:55.244153    7556 system_pods.go:61] "metrics-server-b955d9d8-rt44k" [d0c2b733-0fd1-4e34-88aa-d73b77704a89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 19:42:55.244153    7556 system_pods.go:61] "storage-provisioner" [a104bd0d-cf9b-4dd3-b127-d3e84c4ae96b] Running
	I0531 19:42:55.244153    7556 system_pods.go:74] duration metric: took 78.1757ms to wait for pod list to return data ...
	I0531 19:42:55.244153    7556 default_sa.go:34] waiting for default service account to be created ...
	I0531 19:42:55.253122    7556 default_sa.go:45] found service account: "default"
	I0531 19:42:55.253122    7556 default_sa.go:55] duration metric: took 7.9853ms for default service account to be created ...
	I0531 19:42:55.254122    7556 kubeadm.go:572] duration metric: took 2.5037081s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0531 19:42:55.254122    7556 node_conditions.go:102] verifying NodePressure condition ...
	I0531 19:42:55.265141    7556 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0531 19:42:55.265141    7556 node_conditions.go:123] node cpu capacity is 16
	I0531 19:42:55.265141    7556 node_conditions.go:105] duration metric: took 11.0188ms to run NodePressure ...
	I0531 19:42:55.265141    7556 start.go:213] waiting for startup goroutines ...
	I0531 19:42:56.082328    7556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531193849-2108: (1.6160519s)
	I0531 19:42:56.082328    7556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54454 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220531193849-2108\id_rsa Username:docker}
	I0531 19:42:56.098315    7556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531193849-2108: (1.6210174s)
	I0531 19:42:56.098315    7556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54454 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220531193849-2108\id_rsa Username:docker}
	I0531 19:42:56.129090    7556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531193849-2108: (1.6726375s)
	I0531 19:42:57.078202    8616 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.9594937s)
	I0531 19:42:57.079004    8616 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:92 OomKillDisable:true NGoroutines:63 SystemTime:2022-05-31 19:42:55.6928806 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 19:42:57.083679    8616 out.go:177] * Using the docker driver based on existing profile
	I0531 19:42:55.542888    1340 system_pods.go:86] 5 kube-system pods found
	I0531 19:42:55.542888    1340 system_pods.go:89] "coredns-5644d7b6d9-jxp72" [f831437f-5aa7-42cd-bb72-b1a20942f56a] Running
	I0531 19:42:55.542888    1340 system_pods.go:89] "etcd-old-k8s-version-20220531192531-2108" [d26cf30c-3701-4ea4-8a08-6aa0010138c2] Pending
	I0531 19:42:55.542888    1340 system_pods.go:89] "kube-proxy-r556l" [86e67749-a177-45b6-8857-e4f6c55b8dd8] Running
	I0531 19:42:55.542888    1340 system_pods.go:89] "metrics-server-6f89b5864b-v8mjx" [c248957e-7215-4044-9e73-acc8998b61f1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 19:42:55.542888    1340 system_pods.go:89] "storage-provisioner" [8001c475-f873-4298-ad3e-ac9b8d56927b] Running
	I0531 19:42:55.543887    1340 retry.go:31] will retry after 6.062999549s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0531 19:42:57.086388    8616 start.go:284] selected driver: docker
	I0531 19:42:57.086388    8616 start.go:806] validating driver "docker" against &{Name:embed-certs-20220531193346-2108 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531193346-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 19:42:57.086388    8616 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:42:57.174246    8616 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:42:59.712379    8616 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.5379525s)
	I0531 19:42:59.712780    8616 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:92 OomKillDisable:true NGoroutines:63 SystemTime:2022-05-31 19:42:58.4856581 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 19:42:59.713307    8616 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 19:42:59.713408    8616 cni.go:95] Creating CNI manager for ""
	I0531 19:42:59.713408    8616 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 19:42:59.713435    8616 start_flags.go:306] config:
	{Name:embed-certs-20220531193346-2108 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531193346-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 19:42:59.719162    8616 out.go:177] * Starting control plane node embed-certs-20220531193346-2108 in cluster embed-certs-20220531193346-2108
	I0531 19:42:59.722725    8616 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 19:42:59.725806    8616 out.go:177] * Pulling base image ...
	I0531 19:42:59.729482    8616 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 19:42:59.729687    8616 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 19:42:59.729848    8616 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0531 19:42:59.729948    8616 cache.go:57] Caching tarball of preloaded images
	I0531 19:42:59.730619    8616 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 19:42:59.731046    8616 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0531 19:42:59.731505    8616 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\embed-certs-20220531193346-2108\config.json ...
	I0531 19:42:56.129266    7556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54454 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220531193849-2108\id_rsa Username:docker}
	I0531 19:42:56.152573    7556 cli_runner.go:217] Completed: docker container inspect newest-cni-20220531193849-2108 --format={{.State.Status}}: (1.5871038s)
	I0531 19:42:56.152573    7556 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 19:42:56.152573    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 19:42:56.160019    7556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531193849-2108
	I0531 19:42:56.540386    7556 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 19:42:56.540938    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 19:42:56.555337    7556 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 19:42:56.555337    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 19:42:56.568986    7556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 19:42:56.824720    7556 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 19:42:56.824720    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 19:42:56.831199    7556 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 19:42:56.831199    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 19:42:57.036004    7556 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 19:42:57.036038    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 19:42:57.052974    7556 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 19:42:57.053103    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 19:42:57.174246    7556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 19:42:57.230987    7556 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 19:42:57.231080    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 19:42:57.340153    7556 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 19:42:57.340153    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 19:42:57.529392    7556 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 19:42:57.529392    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 19:42:57.597366    7556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531193849-2108: (1.4373411s)
	I0531 19:42:57.597597    7556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54454 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220531193849-2108\id_rsa Username:docker}
	I0531 19:42:57.642507    7556 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 19:42:57.642507    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 19:42:57.745306    7556 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 19:42:57.745306    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 19:42:57.841187    7556 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 19:42:57.841187    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 19:42:57.950588    7556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 19:42:58.061914    7556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 19:43:01.060754    8616 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 19:43:01.060754    8616 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 19:43:01.060932    8616 cache.go:206] Successfully downloaded all kic artifacts
	I0531 19:43:01.061977    8616 start.go:352] acquiring machines lock for embed-certs-20220531193346-2108: {Name:mk63e14faf496672dd0ff3bf59fb3b27b51120db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:43:01.061977    8616 start.go:356] acquired machines lock for "embed-certs-20220531193346-2108" in 0s
	I0531 19:43:01.061977    8616 start.go:94] Skipping create...Using existing machine configuration
	I0531 19:43:01.061977    8616 fix.go:55] fixHost starting: 
	I0531 19:43:01.077901    8616 cli_runner.go:164] Run: docker container inspect embed-certs-20220531193346-2108 --format={{.State.Status}}
	I0531 19:43:02.412373    8616 cli_runner.go:217] Completed: docker container inspect embed-certs-20220531193346-2108 --format={{.State.Status}}: (1.3344665s)
	I0531 19:43:02.412373    8616 fix.go:103] recreateIfNeeded on embed-certs-20220531193346-2108: state=Stopped err=<nil>
	W0531 19:43:02.412373    8616 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 19:43:02.432375    8616 out.go:177] * Restarting existing docker container for "embed-certs-20220531193346-2108" ...
	I0531 19:43:01.632478    1340 system_pods.go:86] 6 kube-system pods found
	I0531 19:43:01.632478    1340 system_pods.go:89] "coredns-5644d7b6d9-jxp72" [f831437f-5aa7-42cd-bb72-b1a20942f56a] Running
	I0531 19:43:01.632478    1340 system_pods.go:89] "etcd-old-k8s-version-20220531192531-2108" [d26cf30c-3701-4ea4-8a08-6aa0010138c2] Pending
	I0531 19:43:01.632478    1340 system_pods.go:89] "kube-controller-manager-old-k8s-version-20220531192531-2108" [f99468bb-525f-4752-9dab-f40607db0d10] Pending
	I0531 19:43:01.632478    1340 system_pods.go:89] "kube-proxy-r556l" [86e67749-a177-45b6-8857-e4f6c55b8dd8] Running
	I0531 19:43:01.632478    1340 system_pods.go:89] "metrics-server-6f89b5864b-v8mjx" [c248957e-7215-4044-9e73-acc8998b61f1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 19:43:01.632478    1340 system_pods.go:89] "storage-provisioner" [8001c475-f873-4298-ad3e-ac9b8d56927b] Running
	I0531 19:43:01.632478    1340 retry.go:31] will retry after 10.504197539s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0531 19:43:01.236824    7556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.6678186s)
	I0531 19:43:01.635485    7556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.4612198s)
	I0531 19:43:01.635485    7556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.6848808s)
	I0531 19:43:01.635485    7556 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220531193849-2108"
	I0531 19:43:02.727980    7556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.6660457s)
	I0531 19:43:02.732954    7556 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0531 19:43:02.735959    7556 addons.go:417] enableAddons completed in 9.9855123s
	I0531 19:43:02.974430    7556 start.go:504] kubectl: 1.18.2, cluster: 1.23.6 (minor skew: 5)
	I0531 19:43:02.977020    7556 out.go:177] 
	W0531 19:43:02.978721    7556 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.23.6.
	I0531 19:43:02.982712    7556 out.go:177]   - Want kubectl v1.23.6? Try 'minikube kubectl -- get pods -A'
	I0531 19:43:02.986727    7556 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220531193849-2108" cluster and "default" namespace by default
	I0531 19:43:02.454413    8616 cli_runner.go:164] Run: docker start embed-certs-20220531193346-2108
	I0531 19:43:04.888118    8616 cli_runner.go:217] Completed: docker start embed-certs-20220531193346-2108: (2.4336945s)
	I0531 19:43:04.899149    8616 cli_runner.go:164] Run: docker container inspect embed-certs-20220531193346-2108 --format={{.State.Status}}
	I0531 19:43:06.460445    8616 cli_runner.go:217] Completed: docker container inspect embed-certs-20220531193346-2108 --format={{.State.Status}}: (1.5602977s)
	I0531 19:43:06.460445    8616 kic.go:416] container "embed-certs-20220531193346-2108" state is running.
	I0531 19:43:06.474428    8616 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531193346-2108
	I0531 19:43:07.957441    8616 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531193346-2108: (1.4830066s)
	I0531 19:43:07.957441    8616 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\embed-certs-20220531193346-2108\config.json ...
	I0531 19:43:07.960439    8616 machine.go:88] provisioning docker machine ...
	I0531 19:43:07.960439    8616 ubuntu.go:169] provisioning hostname "embed-certs-20220531193346-2108"
	I0531 19:43:07.970438    8616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:43:09.409622    8616 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.4380322s)
	I0531 19:43:09.412613    8616 main.go:134] libmachine: Using SSH client type: native
	I0531 19:43:09.413634    8616 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54560 <nil> <nil>}
	I0531 19:43:09.413634    8616 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220531193346-2108 && echo "embed-certs-20220531193346-2108" | sudo tee /etc/hostname
	I0531 19:43:09.664937    8616 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220531193346-2108
	
	I0531 19:43:09.674907    8616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:43:12.159805    1340 system_pods.go:86] 7 kube-system pods found
	I0531 19:43:12.159805    1340 system_pods.go:89] "coredns-5644d7b6d9-jxp72" [f831437f-5aa7-42cd-bb72-b1a20942f56a] Running
	I0531 19:43:12.159805    1340 system_pods.go:89] "etcd-old-k8s-version-20220531192531-2108" [d26cf30c-3701-4ea4-8a08-6aa0010138c2] Running
	I0531 19:43:12.159805    1340 system_pods.go:89] "kube-controller-manager-old-k8s-version-20220531192531-2108" [f99468bb-525f-4752-9dab-f40607db0d10] Running
	I0531 19:43:12.159805    1340 system_pods.go:89] "kube-proxy-r556l" [86e67749-a177-45b6-8857-e4f6c55b8dd8] Running
	I0531 19:43:12.159805    1340 system_pods.go:89] "kube-scheduler-old-k8s-version-20220531192531-2108" [7bbd9c04-c349-4dae-8a28-e0127457afce] Pending
	I0531 19:43:12.159805    1340 system_pods.go:89] "metrics-server-6f89b5864b-v8mjx" [c248957e-7215-4044-9e73-acc8998b61f1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 19:43:12.159805    1340 system_pods.go:89] "storage-provisioner" [8001c475-f873-4298-ad3e-ac9b8d56927b] Running
	I0531 19:43:12.159805    1340 retry.go:31] will retry after 12.194240946s: missing components: kube-apiserver, kube-scheduler
	I0531 19:43:11.086438    8616 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.4115245s)
	I0531 19:43:11.092434    8616 main.go:134] libmachine: Using SSH client type: native
	I0531 19:43:11.093440    8616 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54560 <nil> <nil>}
	I0531 19:43:11.093440    8616 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220531193346-2108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220531193346-2108/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220531193346-2108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 19:43:11.255429    8616 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 19:43:11.255429    8616 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0531 19:43:11.255429    8616 ubuntu.go:177] setting up certificates
	I0531 19:43:11.255429    8616 provision.go:83] configureAuth start
	I0531 19:43:11.265440    8616 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531193346-2108
	I0531 19:43:12.708520    8616 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531193346-2108: (1.4430742s)
	I0531 19:43:12.708520    8616 provision.go:138] copyHostCerts
	I0531 19:43:12.708520    8616 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0531 19:43:12.708520    8616 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0531 19:43:12.709415    8616 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0531 19:43:12.710359    8616 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0531 19:43:12.710359    8616 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0531 19:43:12.711169    8616 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0531 19:43:12.712053    8616 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0531 19:43:12.712053    8616 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0531 19:43:12.712716    8616 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0531 19:43:12.713400    8616 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.embed-certs-20220531193346-2108 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220531193346-2108]
	I0531 19:43:13.011580    8616 provision.go:172] copyRemoteCerts
	I0531 19:43:13.023550    8616 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 19:43:13.038556    8616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:43:14.338732    8616 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.3001706s)
	I0531 19:43:14.338732    8616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54560 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\embed-certs-20220531193346-2108\id_rsa Username:docker}
	I0531 19:43:14.551697    8616 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.5281407s)
	I0531 19:43:14.552675    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 19:43:14.613469    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1265 bytes)
	I0531 19:43:14.684854    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0531 19:43:14.747113    8616 provision.go:86] duration metric: configureAuth took 3.4916684s
	I0531 19:43:14.747113    8616 ubuntu.go:193] setting minikube options for container-runtime
	I0531 19:43:14.748088    8616 config.go:178] Loaded profile config "embed-certs-20220531193346-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:43:14.761866    8616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:43:16.087899    8616 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.3258419s)
	I0531 19:43:16.091824    8616 main.go:134] libmachine: Using SSH client type: native
	I0531 19:43:16.092214    8616 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54560 <nil> <nil>}
	I0531 19:43:16.092298    8616 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0531 19:43:16.307822    8616 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0531 19:43:16.307822    8616 ubuntu.go:71] root file system type: overlay
	I0531 19:43:16.308825    8616 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0531 19:43:16.317850    8616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:43:17.695543    8616 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.3776872s)
	I0531 19:43:17.701536    8616 main.go:134] libmachine: Using SSH client type: native
	I0531 19:43:17.701536    8616 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54560 <nil> <nil>}
	I0531 19:43:17.701536    8616 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0531 19:43:17.874809    8616 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0531 19:43:17.883833    8616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:43:19.149206    8616 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.2653668s)
	I0531 19:43:19.152226    8616 main.go:134] libmachine: Using SSH client type: native
	I0531 19:43:19.153214    8616 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54560 <nil> <nil>}
	I0531 19:43:19.153214    8616 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0531 19:43:19.357762    8616 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 19:43:19.357762    8616 machine.go:91] provisioned docker machine in 11.397274s
	I0531 19:43:19.357762    8616 start.go:306] post-start starting for "embed-certs-20220531193346-2108" (driver="docker")
	I0531 19:43:19.357762    8616 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 19:43:19.368634    8616 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 19:43:19.375625    8616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:43:24.393133    1340 system_pods.go:86] 8 kube-system pods found
	I0531 19:43:24.393133    1340 system_pods.go:89] "coredns-5644d7b6d9-jxp72" [f831437f-5aa7-42cd-bb72-b1a20942f56a] Running
	I0531 19:43:24.393133    1340 system_pods.go:89] "etcd-old-k8s-version-20220531192531-2108" [d26cf30c-3701-4ea4-8a08-6aa0010138c2] Running
	I0531 19:43:24.393133    1340 system_pods.go:89] "kube-apiserver-old-k8s-version-20220531192531-2108" [d699c547-2f4a-4e3d-8249-2018ca3f8fb7] Running
	I0531 19:43:24.393133    1340 system_pods.go:89] "kube-controller-manager-old-k8s-version-20220531192531-2108" [f99468bb-525f-4752-9dab-f40607db0d10] Running
	I0531 19:43:24.393133    1340 system_pods.go:89] "kube-proxy-r556l" [86e67749-a177-45b6-8857-e4f6c55b8dd8] Running
	I0531 19:43:24.393133    1340 system_pods.go:89] "kube-scheduler-old-k8s-version-20220531192531-2108" [7bbd9c04-c349-4dae-8a28-e0127457afce] Running
	I0531 19:43:24.393133    1340 system_pods.go:89] "metrics-server-6f89b5864b-v8mjx" [c248957e-7215-4044-9e73-acc8998b61f1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 19:43:24.393133    1340 system_pods.go:89] "storage-provisioner" [8001c475-f873-4298-ad3e-ac9b8d56927b] Running
	I0531 19:43:24.393133    1340 system_pods.go:126] duration metric: took 56.8618258s to wait for k8s-apps to be running ...
	I0531 19:43:24.393133    1340 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 19:43:24.403125    1340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:43:24.445798    1340 system_svc.go:56] duration metric: took 52.6647ms WaitForService to wait for kubelet.
	I0531 19:43:24.445798    1340 kubeadm.go:572] duration metric: took 1m7.5077501s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0531 19:43:24.445798    1340 node_conditions.go:102] verifying NodePressure condition ...
	I0531 19:43:24.453799    1340 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0531 19:43:24.453799    1340 node_conditions.go:123] node cpu capacity is 16
	I0531 19:43:24.453799    1340 node_conditions.go:105] duration metric: took 8.0013ms to run NodePressure ...
	I0531 19:43:24.453799    1340 start.go:213] waiting for startup goroutines ...
	I0531 19:43:24.669491    1340 start.go:504] kubectl: 1.18.2, cluster: 1.16.0 (minor skew: 2)
	I0531 19:43:24.674088    1340 out.go:177] 
	W0531 19:43:24.677260    1340 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.16.0.
	I0531 19:43:24.680532    1340 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0531 19:43:24.685300    1340 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-20220531192531-2108" cluster and "default" namespace by default
	I0531 19:43:20.624671    8616 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.2490409s)
	I0531 19:43:20.624671    8616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54560 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\embed-certs-20220531193346-2108\id_rsa Username:docker}
	I0531 19:43:20.759134    8616 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.3904934s)
	I0531 19:43:20.769129    8616 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 19:43:20.781134    8616 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 19:43:20.781134    8616 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 19:43:20.781134    8616 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 19:43:20.781134    8616 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 19:43:20.781134    8616 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0531 19:43:20.782134    8616 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0531 19:43:20.783138    8616 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\21082.pem -> 21082.pem in /etc/ssl/certs
	I0531 19:43:20.797122    8616 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 19:43:20.827072    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\21082.pem --> /etc/ssl/certs/21082.pem (1708 bytes)
	I0531 19:43:20.887348    8616 start.go:309] post-start completed in 1.5295791s
	I0531 19:43:20.898339    8616 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:43:20.904357    8616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:43:22.218797    8616 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.3134488s)
	I0531 19:43:22.218797    8616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54560 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\embed-certs-20220531193346-2108\id_rsa Username:docker}
	I0531 19:43:22.370144    8616 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.4717985s)
	I0531 19:43:22.382114    8616 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 19:43:22.399454    8616 fix.go:57] fixHost completed within 21.3373847s
	I0531 19:43:22.399454    8616 start.go:81] releasing machines lock for "embed-certs-20220531193346-2108", held for 21.3373847s
	I0531 19:43:22.406442    8616 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531193346-2108
	I0531 19:43:23.736218    8616 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531193346-2108: (1.3297705s)
	I0531 19:43:23.739211    8616 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 19:43:23.747232    8616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:43:23.748219    8616 ssh_runner.go:195] Run: systemctl --version
	I0531 19:43:23.757214    8616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:43:25.191855    8616 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.4446175s)
	I0531 19:43:25.192881    8616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54560 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\embed-certs-20220531193346-2108\id_rsa Username:docker}
	I0531 19:43:25.216528    8616 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.4593082s)
	I0531 19:43:25.216528    8616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54560 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\embed-certs-20220531193346-2108\id_rsa Username:docker}
	I0531 19:43:25.434737    8616 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.6955191s)
	I0531 19:43:25.434737    8616 ssh_runner.go:235] Completed: systemctl --version: (1.6865115s)
	I0531 19:43:25.450724    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 19:43:25.564263    8616 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 19:43:25.598655    8616 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0531 19:43:25.608101    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 19:43:25.652695    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 19:43:25.706768    8616 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0531 19:43:25.913648    8616 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0531 19:43:26.130777    8616 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 19:43:26.181761    8616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:43:26.375718    8616 ssh_runner.go:195] Run: sudo systemctl start docker
	I0531 19:43:26.412452    8616 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 19:43:26.527203    8616 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 19:43:26.636205    8616 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0531 19:43:26.651207    8616 cli_runner.go:164] Run: docker exec -t embed-certs-20220531193346-2108 dig +short host.docker.internal
	I0531 19:43:28.191872    8616 cli_runner.go:217] Completed: docker exec -t embed-certs-20220531193346-2108 dig +short host.docker.internal: (1.5406583s)
	I0531 19:43:28.191872    8616 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0531 19:43:28.202880    8616 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0531 19:43:28.218418    8616 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 19:43:28.262706    8616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:43:29.498586    8616 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.2358751s)
	I0531 19:43:29.498586    8616 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 19:43:29.505594    8616 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 19:43:29.585592    8616 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0531 19:43:29.585592    8616 docker.go:541] Images already preloaded, skipping extraction
	I0531 19:43:29.592584    8616 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 19:43:29.692450    8616 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0531 19:43:29.692450    8616 cache_images.go:84] Images are preloaded, skipping loading
	I0531 19:43:29.698436    8616 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0531 19:43:29.922432    8616 cni.go:95] Creating CNI manager for ""
	I0531 19:43:29.922549    8616 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 19:43:29.922549    8616 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 19:43:29.922549    8616 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220531193346-2108 NodeName:embed-certs-20220531193346-2108 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 19:43:29.922775    8616 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "embed-certs-20220531193346-2108"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 19:43:29.922863    8616 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=embed-certs-20220531193346-2108 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531193346-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 19:43:29.937834    8616 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 19:43:29.970126    8616 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 19:43:29.980712    8616 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 19:43:30.003043    8616 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (357 bytes)
	I0531 19:43:30.040432    8616 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 19:43:30.080432    8616 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2053 bytes)
	I0531 19:43:30.128472    8616 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0531 19:43:30.144212    8616 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 19:43:30.174247    8616 certs.go:54] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\embed-certs-20220531193346-2108 for IP: 192.168.67.2
	I0531 19:43:30.174247    8616 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0531 19:43:30.175233    8616 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0531 19:43:30.175233    8616 certs.go:298] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\embed-certs-20220531193346-2108\client.key
	I0531 19:43:30.176258    8616 certs.go:298] skipping minikube signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\embed-certs-20220531193346-2108\apiserver.key.c7fa3a9e
	I0531 19:43:30.176258    8616 certs.go:298] skipping aggregator signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\embed-certs-20220531193346-2108\proxy-client.key
	I0531 19:43:30.177245    8616 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\2108.pem (1338 bytes)
	W0531 19:43:30.177245    8616 certs.go:384] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\2108_empty.pem, impossibly tiny 0 bytes
	I0531 19:43:30.178242    8616 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0531 19:43:30.178242    8616 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0531 19:43:30.178242    8616 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0531 19:43:30.178242    8616 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0531 19:43:30.179223    8616 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\21082.pem (1708 bytes)
	I0531 19:43:30.181232    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\embed-certs-20220531193346-2108\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 19:43:30.231446    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\embed-certs-20220531193346-2108\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 19:43:30.286026    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\embed-certs-20220531193346-2108\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 19:43:30.347980    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\embed-certs-20220531193346-2108\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 19:43:30.409568    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 19:43:30.468376    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 19:43:30.523994    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 19:43:30.580310    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 19:43:30.649449    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 19:43:30.719603    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\2108.pem --> /usr/share/ca-certificates/2108.pem (1338 bytes)
	I0531 19:43:30.786513    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\21082.pem --> /usr/share/ca-certificates/21082.pem (1708 bytes)
	I0531 19:43:30.859521    8616 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 19:43:30.907080    8616 ssh_runner.go:195] Run: openssl version
	I0531 19:43:30.929093    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 19:43:30.986864    8616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:43:31.005336    8616 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:19 /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:43:31.014329    8616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:43:31.040343    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 19:43:31.077357    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2108.pem && ln -fs /usr/share/ca-certificates/2108.pem /etc/ssl/certs/2108.pem"
	I0531 19:43:31.111349    8616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2108.pem
	I0531 19:43:31.121345    8616 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:31 /usr/share/ca-certificates/2108.pem
	I0531 19:43:31.132335    8616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2108.pem
	I0531 19:43:31.156335    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2108.pem /etc/ssl/certs/51391683.0"
	I0531 19:43:31.198345    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21082.pem && ln -fs /usr/share/ca-certificates/21082.pem /etc/ssl/certs/21082.pem"
	I0531 19:43:31.230395    8616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21082.pem
	I0531 19:43:31.245833    8616 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:31 /usr/share/ca-certificates/21082.pem
	I0531 19:43:31.256203    8616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21082.pem
	I0531 19:43:31.279205    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21082.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 19:43:31.309204    8616 kubeadm.go:395] StartCluster: {Name:embed-certs-20220531193346-2108 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531193346-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 19:43:31.319672    8616 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 19:43:31.403071    8616 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 19:43:31.428156    8616 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 19:43:31.429149    8616 kubeadm.go:626] restartCluster start
	I0531 19:43:31.439155    8616 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 19:43:31.459646    8616 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:31.471196    8616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:43:32.683774    8616 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.2125727s)
	I0531 19:43:32.685818    8616 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220531193346-2108" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0531 19:43:32.687090    8616 kubeconfig.go:127] "embed-certs-20220531193346-2108" context is missing from C:\Users\jenkins.minikube7\minikube-integration\kubeconfig - will repair!
	I0531 19:43:32.689552    8616 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:43:32.711443    8616 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 19:43:32.741481    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:32.755718    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:32.782840    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:32.998467    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:33.009814    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:33.038110    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:33.187690    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:33.198728    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:33.231759    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:33.389082    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:33.401128    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:33.437991    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:33.596003    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:33.610434    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:33.649694    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:33.783691    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:33.794460    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:33.820957    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:33.989461    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:34.000602    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:34.029721    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:34.197752    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:34.208129    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:34.240212    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:34.388876    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:34.398381    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:34.427592    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:34.592690    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:34.603583    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:34.630736    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:34.793731    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:34.802574    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:34.840450    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:34.982936    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:34.994510    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:35.019918    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:35.194019    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:35.206505    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:35.239207    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:35.383180    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:35.395711    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:35.424130    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:35.588664    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:35.599428    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:35.628476    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:35.794694    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:35.804703    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:35.835435    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:35.835435    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:35.845430    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:35.872922    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:35.872966    8616 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0531 19:43:35.872966    8616 kubeadm.go:1092] stopping kube-system containers ...
	I0531 19:43:35.888288    8616 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 19:43:36.000416    8616 docker.go:442] Stopping containers: [18f322b433f7 51e6610a10f8 a57711de2363 b6d1008f5a7a 05f4ee855b48 5f2d7ca41131 a4c5949a368e 76eefca54c8e 938b770e6a22 7d10308b7ec6 284f4ae8d238 af695fbb3afb 09ad7210375d 74454f5b27c5 95eae5077bdb 459d9f4673a9]
	I0531 19:43:36.009398    8616 ssh_runner.go:195] Run: docker stop 18f322b433f7 51e6610a10f8 a57711de2363 b6d1008f5a7a 05f4ee855b48 5f2d7ca41131 a4c5949a368e 76eefca54c8e 938b770e6a22 7d10308b7ec6 284f4ae8d238 af695fbb3afb 09ad7210375d 74454f5b27c5 95eae5077bdb 459d9f4673a9
	I0531 19:43:36.107408    8616 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 19:43:36.149402    8616 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 19:43:36.172404    8616 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 31 19:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 May 31 19:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 May 31 19:41 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 May 31 19:41 /etc/kubernetes/scheduler.conf
	
	I0531 19:43:36.182417    8616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 19:43:36.212422    8616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 19:43:36.244428    8616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 19:43:36.266416    8616 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:36.278403    8616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 19:43:36.311760    8616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 19:43:36.334758    8616 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:36.343790    8616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 19:43:36.383787    8616 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 19:43:36.405748    8616 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 19:43:36.405748    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:43:36.547878    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:43:37.853479    8616 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.3055954s)
	I0531 19:43:37.854020    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:43:38.190227    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:43:38.364256    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:43:38.640714    8616 api_server.go:51] waiting for apiserver process to appear ...
	I0531 19:43:38.654240    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:43:39.203285    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:43:39.698006    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:43:40.197265    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:43:40.699188    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:43:41.204693    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:43:41.696036    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:43:42.200147    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:43:42.701351    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:43:42.745446    8616 api_server.go:71] duration metric: took 4.1047578s to wait for apiserver process to appear ...
	I0531 19:43:42.745446    8616 api_server.go:87] waiting for apiserver healthz status ...
	I0531 19:43:42.745446    8616 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54559/healthz ...
	I0531 19:43:42.750441    8616 api_server.go:256] stopped: https://127.0.0.1:54559/healthz: Get "https://127.0.0.1:54559/healthz": EOF
	I0531 19:43:43.260468    8616 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54559/healthz ...
	I0531 19:43:44.816816    9560 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (50.8628666s)
	I0531 19:43:44.834841    9560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:43:44.875054    9560 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 19:43:44.910403    9560 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 19:43:44.926341    9560 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 19:43:44.952402    9560 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 19:43:44.952402    9560 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 19:43:48.268195    8616 api_server.go:256] stopped: https://127.0.0.1:54559/healthz: Get "https://127.0.0.1:54559/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0531 19:43:48.760057    8616 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54559/healthz ...
	I0531 19:43:49.742236    8616 api_server.go:266] https://127.0.0.1:54559/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 19:43:49.742236    8616 api_server.go:102] status: https://127.0.0.1:54559/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 19:43:49.762242    8616 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54559/healthz ...
	I0531 19:43:49.834262    8616 api_server.go:266] https://127.0.0.1:54559/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 19:43:49.834262    8616 api_server.go:102] status: https://127.0.0.1:54559/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 19:43:50.265125    8616 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54559/healthz ...
	I0531 19:43:50.325042    8616 api_server.go:266] https://127.0.0.1:54559/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:43:50.325997    8616 api_server.go:102] status: https://127.0.0.1:54559/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:43:50.754281    8616 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54559/healthz ...
	I0531 19:43:50.778515    8616 api_server.go:266] https://127.0.0.1:54559/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:43:50.778515    8616 api_server.go:102] status: https://127.0.0.1:54559/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:43:51.256439    8616 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54559/healthz ...
	I0531 19:43:51.326599    8616 api_server.go:266] https://127.0.0.1:54559/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:43:51.326599    8616 api_server.go:102] status: https://127.0.0.1:54559/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:43:51.759811    8616 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54559/healthz ...
	I0531 19:43:51.871137    8616 api_server.go:266] https://127.0.0.1:54559/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:43:51.871223    8616 api_server.go:102] status: https://127.0.0.1:54559/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:43:52.250743    8616 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54559/healthz ...
	I0531 19:43:52.368759    8616 api_server.go:266] https://127.0.0.1:54559/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:43:52.368759    8616 api_server.go:102] status: https://127.0.0.1:54559/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:43:52.756360    8616 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54559/healthz ...
	I0531 19:43:52.847081    8616 api_server.go:266] https://127.0.0.1:54559/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:43:52.847081    8616 api_server.go:102] status: https://127.0.0.1:54559/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:43:53.262170    8616 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54559/healthz ...
	I0531 19:43:53.433182    8616 api_server.go:266] https://127.0.0.1:54559/healthz returned 200:
	ok
	I0531 19:43:53.526458    8616 api_server.go:140] control plane version: v1.23.6
	I0531 19:43:53.526458    8616 api_server.go:130] duration metric: took 10.7809658s to wait for apiserver health ...
	I0531 19:43:53.526458    8616 cni.go:95] Creating CNI manager for ""
	I0531 19:43:53.526458    8616 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 19:43:53.526780    8616 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 19:43:53.834702    8616 system_pods.go:59] 8 kube-system pods found
	I0531 19:43:53.834702    8616 system_pods.go:61] "coredns-64897985d-h6l4d" [45e6521b-b5ba-4365-bf64-ed7f35254f8d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0531 19:43:53.834702    8616 system_pods.go:61] "etcd-embed-certs-20220531193346-2108" [8748c975-baa9-452c-8caa-89e8ff59a91a] Running
	I0531 19:43:53.834702    8616 system_pods.go:61] "kube-apiserver-embed-certs-20220531193346-2108" [df786508-0ac8-406a-b97b-c6650f016ceb] Running
	I0531 19:43:53.834702    8616 system_pods.go:61] "kube-controller-manager-embed-certs-20220531193346-2108" [fc4dc9a6-b756-47c3-93c8-b37e4a6af2ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 19:43:53.834702    8616 system_pods.go:61] "kube-proxy-qmdlz" [e19359f3-a2a5-4148-a7a1-ba356b861a3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 19:43:53.834702    8616 system_pods.go:61] "kube-scheduler-embed-certs-20220531193346-2108" [f3efddc7-a2d0-44d8-a33b-a9b19295fe12] Running
	I0531 19:43:53.834702    8616 system_pods.go:61] "metrics-server-b955d9d8-n88dp" [eb244db6-b3bb-4838-a675-474d5d7a17d7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 19:43:53.834702    8616 system_pods.go:61] "storage-provisioner" [1f06e411-8a16-4f79-b44d-1c60e7d37395] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0531 19:43:53.834702    8616 system_pods.go:74] duration metric: took 307.92ms to wait for pod list to return data ...
	I0531 19:43:53.834702    8616 node_conditions.go:102] verifying NodePressure condition ...
	I0531 19:43:53.929683    8616 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0531 19:43:53.929683    8616 node_conditions.go:123] node cpu capacity is 16
	I0531 19:43:53.929683    8616 node_conditions.go:105] duration metric: took 94.9807ms to run NodePressure ...
	I0531 19:43:53.929683    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:43:56.629261    8616 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.6995662s)
	I0531 19:43:56.629261    8616 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0531 19:43:56.740265    8616 kubeadm.go:777] kubelet initialised
	I0531 19:43:56.740265    8616 kubeadm.go:778] duration metric: took 111.0043ms waiting for restarted kubelet to initialise ...
	I0531 19:43:56.740265    8616 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:43:56.844460    8616 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-h6l4d" in "kube-system" namespace to be "Ready" ...
	I0531 19:43:58.488158    8616 pod_ready.go:92] pod "coredns-64897985d-h6l4d" in "kube-system" namespace has status "Ready":"True"
	I0531 19:43:58.488158    8616 pod_ready.go:81] duration metric: took 1.6436914s waiting for pod "coredns-64897985d-h6l4d" in "kube-system" namespace to be "Ready" ...
	I0531 19:43:58.488158    8616 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220531193346-2108" in "kube-system" namespace to be "Ready" ...
	I0531 19:43:58.510159    8616 pod_ready.go:92] pod "etcd-embed-certs-20220531193346-2108" in "kube-system" namespace has status "Ready":"True"
	I0531 19:43:58.510159    8616 pod_ready.go:81] duration metric: took 22.0001ms waiting for pod "etcd-embed-certs-20220531193346-2108" in "kube-system" namespace to be "Ready" ...
	I0531 19:43:58.510159    8616 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220531193346-2108" in "kube-system" namespace to be "Ready" ...
	I0531 19:43:58.527166    8616 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220531193346-2108" in "kube-system" namespace has status "Ready":"True"
	I0531 19:43:58.527166    8616 pod_ready.go:81] duration metric: took 17.0071ms waiting for pod "kube-apiserver-embed-certs-20220531193346-2108" in "kube-system" namespace to be "Ready" ...
	I0531 19:43:58.527166    8616 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220531193346-2108" in "kube-system" namespace to be "Ready" ...
	I0531 19:43:58.550165    8616 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220531193346-2108" in "kube-system" namespace has status "Ready":"True"
	I0531 19:43:58.550165    8616 pod_ready.go:81] duration metric: took 22.9989ms waiting for pod "kube-controller-manager-embed-certs-20220531193346-2108" in "kube-system" namespace to be "Ready" ...
	I0531 19:43:58.550165    8616 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qmdlz" in "kube-system" namespace to be "Ready" ...
	I0531 19:43:58.565167    8616 pod_ready.go:92] pod "kube-proxy-qmdlz" in "kube-system" namespace has status "Ready":"True"
	I0531 19:43:58.566182    8616 pod_ready.go:81] duration metric: took 16.0169ms waiting for pod "kube-proxy-qmdlz" in "kube-system" namespace to be "Ready" ...
	I0531 19:43:58.566182    8616 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220531193346-2108" in "kube-system" namespace to be "Ready" ...
	I0531 19:43:58.885714    8616 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220531193346-2108" in "kube-system" namespace has status "Ready":"True"
	I0531 19:43:58.885714    8616 pod_ready.go:81] duration metric: took 319.5308ms waiting for pod "kube-scheduler-embed-certs-20220531193346-2108" in "kube-system" namespace to be "Ready" ...
	I0531 19:43:58.885714    8616 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-n88dp" in "kube-system" namespace to be "Ready" ...
	I0531 19:44:01.299568    8616 pod_ready.go:102] pod "metrics-server-b955d9d8-n88dp" in "kube-system" namespace has status "Ready":"False"
	I0531 19:44:03.307027    8616 pod_ready.go:102] pod "metrics-server-b955d9d8-n88dp" in "kube-system" namespace has status "Ready":"False"
	I0531 19:44:07.756660    9560 out.go:204]   - Generating certificates and keys ...
	I0531 19:44:07.763661    9560 out.go:204]   - Booting up control plane ...
	I0531 19:44:07.770647    9560 out.go:204]   - Configuring RBAC rules ...
	I0531 19:44:07.776631    9560 cni.go:95] Creating CNI manager for ""
	I0531 19:44:07.776631    9560 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 19:44:07.776631    9560 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 19:44:07.790660    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=default-k8s-different-port-20220531193451-2108 minikube.k8s.io/updated_at=2022_05_31T19_44_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:44:07.790660    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:44:07.840339    9560 ops.go:34] apiserver oom_adj: -16
	I0531 19:44:08.352099    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-05-31 19:35:55 UTC, end at Tue 2022-05-31 19:44:16 UTC. --
	May 31 19:41:34 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:41:34.111328500Z" level=info msg="ignoring event" container=56f2c3f6f12d593c333cb56a645be9bccf6ec84c695bd35b566c7eeb8726b9eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:41:34 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:41:34.438347700Z" level=info msg="ignoring event" container=1e1669e95902d78a506bfba3e4980acc39f4e7c136bac62f41c3780d8f0816a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:41:34 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:41:34.771144900Z" level=info msg="ignoring event" container=2dd05fb9ca85f1c865a22a2973b8fb5a65db284592e4e14e2ddd5e91443dbedc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:41:35 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:41:35.070201300Z" level=info msg="ignoring event" container=edfbcbda899056e2410c4acf8ea6a853cdd0b2fbe327f91acc1611a8d3d3391c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:42:18 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:42:18.649088800Z" level=error msg="stream copy error: reading from a closed fifo"
	May 31 19:42:18 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:42:18.719649300Z" level=error msg="stream copy error: reading from a closed fifo"
	May 31 19:42:19 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:42:19.821922300Z" level=error msg="247dabaa3d5f823c5b66c2062b47692e5b49e91435959ae09de0f032a5623800 cleanup: failed to delete container from containerd: no such container"
	May 31 19:42:19 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:42:19.822450700Z" level=error msg="Handler for POST /containers/247dabaa3d5f823c5b66c2062b47692e5b49e91435959ae09de0f032a5623800/start returned error: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: writing syncT \"procResume\": write init-p: broken pipe: unknown"
	May 31 19:42:28 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:42:28.645661400Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:42:28 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:42:28.645743100Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:42:28 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:42:28.655547400Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:42:29 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:42:29.873960300Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	May 31 19:42:30 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:42:30.049928000Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	May 31 19:42:47 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:42:47.980007500Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	May 31 19:42:48 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:42:48.345433300Z" level=info msg="ignoring event" container=9baa5613dc437df519abd371e1c81dd506a65210c0049a8f8c6a60cba02b84c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:42:49 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:42:49.050869500Z" level=info msg="ignoring event" container=a82f5f7cd3eb6a6365072d16275bd8268df4e96e619a79fa57e043015adfdee5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:43:01 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:43:01.022851800Z" level=info msg="ignoring event" container=817b790393050f42b01ddb3442a99a6c5ba58651a8df3b91e5f0c0ffb80c4666 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:43:11 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:43:11.258016700Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:43:11 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:43:11.258272000Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:43:11 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:43:11.270055200Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:43:22 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:43:22.420615000Z" level=info msg="ignoring event" container=03167ba0f08fdff581ad496cd4bbb5dbff52a4602923c0fecd2f201ef3c775cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:43:39 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:43:39.644305000Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:43:39 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:43:39.644465200Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:43:39 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:43:39.661137500Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:44:09 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:44:09.254101000Z" level=info msg="ignoring event" container=d9bab4566384602eebba16adb5527bbde86b9377f00da7d2e1af2ddc1ea2b2a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	d9bab45663846       a90209bb39e3d                                                                                    9 seconds ago        Exited              dashboard-metrics-scraper   4                   10762210a70b3
	8c65f88737998       kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2   About a minute ago   Running             kubernetes-dashboard        0                   363a1b018898e
	78333097362ef       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   8357a697373e0
	fb1686d28fd2f       bf261d1579144                                                                                    About a minute ago   Running             coredns                     0                   e64f972d867d6
	49b44baa515c4       c21b0c7400f98                                                                                    2 minutes ago        Running             kube-proxy                  0                   46895f03742d0
	be55be2991218       b2756210eeabf                                                                                    2 minutes ago        Running             etcd                        0                   2f1d5465012c0
	2e7bf308b9082       301ddc62b80b1                                                                                    2 minutes ago        Running             kube-scheduler              0                   2b7ee67a9db7a
	86f1dffe85081       b305571ca60a5                                                                                    2 minutes ago        Running             kube-apiserver              0                   28f7e7bc70e63
	963332154a4f7       06a629a7e51cd                                                                                    2 minutes ago        Running             kube-controller-manager     0                   b80cecadcecf5
	
	* 
	* ==> coredns [fb1686d28fd2] <==
	* .:53
	2022-05-31T19:42:20.924Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2022-05-31T19:42:20.925Z [INFO] CoreDNS-1.6.2
	2022-05-31T19:42:20.929Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2022-05-31T19:42:57.534Z [INFO] plugin/reload: Running configuration MD5 = 034a4984a79adc08e57427d1bc08b68f
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20220531192531-2108
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20220531192531-2108
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=old-k8s-version-20220531192531-2108
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T19_41_59_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 19:41:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 19:44:16 +0000   Tue, 31 May 2022 19:41:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 19:44:16 +0000   Tue, 31 May 2022 19:41:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 19:44:16 +0000   Tue, 31 May 2022 19:41:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 May 2022 19:44:16 +0000   Tue, 31 May 2022 19:41:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-20220531192531-2108
	Capacity:
	 cpu:                16
	 ephemeral-storage:  263174212Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             52638988Ki
	 pods:               110
	Allocatable:
	 cpu:                16
	 ephemeral-storage:  263174212Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             52638988Ki
	 pods:               110
	System Info:
	 Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	 System UUID:                bfc82849fe6e4a6a9236307a23a8b5f1
	 Boot ID:                    99d8680c-6839-4c5e-a5fa-8740ef80d5ef
	 Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	 OS Image:                   Ubuntu 20.04.4 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://20.10.16
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-jxp72                                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m4s
	  kube-system                etcd-old-k8s-version-20220531192531-2108                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                kube-apiserver-old-k8s-version-20220531192531-2108             250m (1%)     0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                kube-controller-manager-old-k8s-version-20220531192531-2108    200m (1%)     0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                kube-proxy-r556l                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                kube-scheduler-old-k8s-version-20220531192531-2108             100m (0%)     0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                metrics-server-6f89b5864b-v8mjx                                100m (0%)     0 (0%)      200Mi (0%)       0 (0%)         115s
	  kube-system                storage-provisioner                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kubernetes-dashboard       dashboard-metrics-scraper-6b84985989-cs5v7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard       kubernetes-dashboard-6fb5469cf5-spl8w                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%)   0 (0%)
	  memory             270Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From                                             Message
	  ----    ------                   ----                   ----                                             -------
	  Normal  NodeHasSufficientMemory  2m37s (x7 over 2m38s)  kubelet, old-k8s-version-20220531192531-2108     Node old-k8s-version-20220531192531-2108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m37s (x8 over 2m38s)  kubelet, old-k8s-version-20220531192531-2108     Node old-k8s-version-20220531192531-2108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m37s (x8 over 2m38s)  kubelet, old-k8s-version-20220531192531-2108     Node old-k8s-version-20220531192531-2108 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m                     kube-proxy, old-k8s-version-20220531192531-2108  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.001366] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000006] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000932] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000006] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.089750] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.002712] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.106424] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.091580] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000006] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[May31 19:22] WSL2: Performing memory compaction.
	[May31 19:23] WSL2: Performing memory compaction.
	[May31 19:24] WSL2: Performing memory compaction.
	[May31 19:25] WSL2: Performing memory compaction.
	[May31 19:26] WSL2: Performing memory compaction.
	[May31 19:27] WSL2: Performing memory compaction.
	[May31 19:28] WSL2: Performing memory compaction.
	[May31 19:30] WSL2: Performing memory compaction.
	[May31 19:32] WSL2: Performing memory compaction.
	[May31 19:34] WSL2: Performing memory compaction.
	[May31 19:37] WSL2: Performing memory compaction.
	[May31 19:39] WSL2: Performing memory compaction.
	[May31 19:40] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [be55be299121] <==
	* 2022-05-31 19:42:22.743121 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/metrics-server\" " with result "range_response_count:0 size:5" took too long (106.7399ms) to execute
	2022-05-31 19:42:22.950495 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/metrics-server\" " with result "range_response_count:0 size:5" took too long (108.5051ms) to execute
	2022-05-31 19:42:24.137531 W | etcdserver: read-only range request "key:\"/registry/namespaces/kubernetes-dashboard\" " with result "range_response_count:0 size:5" took too long (105.795ms) to execute
	2022-05-31 19:42:24.246233 W | etcdserver: read-only range request "key:\"/registry/storageclasses/standard\" " with result "range_response_count:0 size:5" took too long (110.6488ms) to execute
	2022-05-31 19:42:24.441034 W | etcdserver: read-only range request "key:\"/registry/replicasets/kube-system/metrics-server-6f89b5864b\" " with result "range_response_count:1 size:1313" took too long (111.79ms) to execute
	2022-05-31 19:42:24.441959 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-proxy-r556l\" " with result "range_response_count:1 size:2188" took too long (110.2984ms) to execute
	2022-05-31 19:42:24.443824 W | etcdserver: read-only range request "key:\"/registry/clusterroles/kubernetes-dashboard\" " with result "range_response_count:0 size:5" took too long (112.0249ms) to execute
	2022-05-31 19:42:24.724190 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/kubernetes-dashboard\" " with result "range_response_count:0 size:5" took too long (100.0251ms) to execute
	2022-05-31 19:42:25.350706 W | etcdserver: read-only range request "key:\"/registry/roles/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:0 size:5" took too long (231.0518ms) to execute
	2022-05-31 19:42:25.623252 W | etcdserver: read-only range request "key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:1 size:2624" took too long (191.1718ms) to execute
	2022-05-31 19:42:25.623439 W | etcdserver: request "header:<ID:15638328711796955181 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" mod_revision:436 > success:<request_put:<key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" value_size:2711 >> failure:<request_range:<key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" > >>" with result "size:16" took too long (103.4478ms) to execute
	2022-05-31 19:42:25.624064 W | etcdserver: read-only range request "key:\"/registry/rolebindings/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:0 size:5" took too long (182.9407ms) to execute
	2022-05-31 19:42:25.828303 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:0 size:5" took too long (194.2906ms) to execute
	2022-05-31 19:42:25.828526 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:0 size:5" took too long (100.0597ms) to execute
	2022-05-31 19:42:25.828730 W | etcdserver: read-only range request "key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6b84985989.16f44803f955b5cc\" " with result "range_response_count:1 size:695" took too long (108.0618ms) to execute
	2022-05-31 19:42:26.630811 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-cs5v7\" " with result "range_response_count:1 size:1332" took too long (107.6487ms) to execute
	2022-05-31 19:42:26.830639 W | etcdserver: read-only range request "key:\"/registry/services/specs/kubernetes-dashboard/dashboard-metrics-scraper\" " with result "range_response_count:0 size:5" took too long (105.1522ms) to execute
	2022-05-31 19:42:26.840350 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (107.9713ms) to execute
	2022-05-31 19:42:27.442251 W | etcdserver: request "header:<ID:15638328711796955265 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" mod_revision:446 > success:<request_put:<key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" value_size:2654 >> failure:<request_range:<key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" > >>" with result "size:16" took too long (107.2543ms) to execute
	2022-05-31 19:42:27.442940 W | etcdserver: read-only range request "key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5\" " with result "range_response_count:1 size:1341" took too long (114.3883ms) to execute
	2022-05-31 19:42:27.443132 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5-spl8w\" " with result "range_response_count:1 size:1433" took too long (118.4499ms) to execute
	2022-05-31 19:42:27.443515 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:4 size:7727" took too long (103.0409ms) to execute
	2022-05-31 19:42:27.726784 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:4 size:7727" took too long (191.366ms) to execute
	2022-05-31 19:42:59.737641 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (192.8333ms) to execute
	2022-05-31 19:43:46.615330 W | etcdserver: read-only range request "key:\"/registry/services/endpoints\" range_end:\"/registry/services/endpointt\" count_only:true " with result "range_response_count:0 size:7" took too long (179.4081ms) to execute
	
	* 
	* ==> kernel <==
	*  19:44:19 up  2:32,  0 users,  load average: 11.37, 6.91, 4.69
	Linux old-k8s-version-20220531192531-2108 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [86f1dffe8508] <==
	* I0531 19:41:54.923124       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I0531 19:41:54.923189       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0531 19:41:56.856445       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 19:41:57.121158       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0531 19:41:57.373567       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0531 19:41:57.374922       1 controller.go:606] quota admission added evaluator for: endpoints
	I0531 19:41:58.391090       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0531 19:41:58.838399       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0531 19:41:59.067007       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0531 19:42:00.831630       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 19:42:15.123502       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0531 19:42:15.127329       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0531 19:42:15.540631       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	E0531 19:42:21.037408       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
	E0531 19:42:24.321308       1 available_controller.go:416] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0531 19:42:28.041510       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0531 19:42:28.042270       1 handler_proxy.go:99] no RequestInfo found in the context
	E0531 19:42:28.042505       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 19:42:28.042586       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0531 19:43:28.042562       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0531 19:43:28.042881       1 handler_proxy.go:99] no RequestInfo found in the context
	E0531 19:43:28.042943       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 19:43:28.042960       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [963332154a4f] <==
	* I0531 19:42:25.629311       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"9c398431-17d0-4314-8e73-409c72e6fd2f", APIVersion:"apps/v1", ResourceVersion:"435", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 19:42:25.629361       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5" failed with pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 19:42:25.723920       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5" failed with pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:42:25.724140       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"40a45895-1dcf-45f7-a413-5594ec6dfeae", APIVersion:"apps/v1", ResourceVersion:"442", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 19:42:25.831478       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5" failed with pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:42:25.831589       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"9c398431-17d0-4314-8e73-409c72e6fd2f", APIVersion:"apps/v1", ResourceVersion:"435", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:42:25.831621       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"40a45895-1dcf-45f7-a413-5594ec6dfeae", APIVersion:"apps/v1", ResourceVersion:"442", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 19:42:25.831478       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 19:42:25.845070       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5" failed with pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 19:42:25.845070       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:42:25.845111       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"9c398431-17d0-4314-8e73-409c72e6fd2f", APIVersion:"apps/v1", ResourceVersion:"435", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:42:25.845338       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"40a45895-1dcf-45f7-a413-5594ec6dfeae", APIVersion:"apps/v1", ResourceVersion:"442", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 19:42:25.925957       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5" failed with pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:42:25.926050       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"40a45895-1dcf-45f7-a413-5594ec6dfeae", APIVersion:"apps/v1", ResourceVersion:"442", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 19:42:25.926273       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:42:25.926443       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"9c398431-17d0-4314-8e73-409c72e6fd2f", APIVersion:"apps/v1", ResourceVersion:"435", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:42:26.334974       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"9c398431-17d0-4314-8e73-409c72e6fd2f", APIVersion:"apps/v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-6b84985989-cs5v7
	I0531 19:42:27.125010       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"40a45895-1dcf-45f7-a413-5594ec6dfeae", APIVersion:"apps/v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-6fb5469cf5-spl8w
	E0531 19:42:45.384462       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 19:42:47.140581       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 19:43:15.637500       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 19:43:19.143177       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 19:43:45.891581       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 19:43:51.148696       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 19:44:16.143760       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [49b44baa515c] <==
	* W0531 19:42:19.444246       1 proxier.go:584] Failed to read file /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin with error open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0531 19:42:19.446817       1 proxier.go:597] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0531 19:42:19.448464       1 proxier.go:597] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0531 19:42:19.450411       1 proxier.go:597] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0531 19:42:19.452862       1 proxier.go:597] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0531 19:42:19.455345       1 proxier.go:597] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0531 19:42:19.467121       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0531 19:42:19.544240       1 node.go:135] Successfully retrieved node IP: 192.168.76.2
	I0531 19:42:19.544403       1 server_others.go:149] Using iptables Proxier.
	I0531 19:42:19.546875       1 server.go:529] Version: v1.16.0
	I0531 19:42:19.549594       1 config.go:313] Starting service config controller
	I0531 19:42:19.549892       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0531 19:42:19.550030       1 config.go:131] Starting endpoints config controller
	I0531 19:42:19.550059       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0531 19:42:19.719990       1 shared_informer.go:204] Caches are synced for service config 
	I0531 19:42:19.720004       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [2e7bf308b908] <==
	* E0531 19:41:54.325388       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 19:41:54.325504       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 19:41:54.325639       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 19:41:54.329115       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 19:41:54.329869       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 19:41:54.330888       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 19:41:54.330887       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 19:41:54.332887       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 19:41:54.332892       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 19:41:55.329102       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 19:41:55.332862       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 19:41:55.332977       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 19:41:55.334522       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 19:41:55.334630       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 19:41:55.423075       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 19:41:55.423889       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 19:41:55.425687       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 19:41:55.425758       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 19:41:55.431862       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 19:41:55.432247       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 19:41:56.333544       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 19:41:56.336222       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 19:41:56.338066       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 19:41:56.342236       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 19:41:56.342315       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 19:35:55 UTC, end at Tue 2022-05-31 19:44:19 UTC. --
	May 31 19:43:11 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:43:11.271608    5553 kuberuntime_image.go:50] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	May 31 19:43:11 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:43:11.271930    5553 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	May 31 19:43:11 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:43:11.272001    5553 pod_workers.go:191] Error syncing pod c248957e-7215-4044-9e73-acc8998b61f1 ("metrics-server-6f89b5864b-v8mjx_kube-system(c248957e-7215-4044-9e73-acc8998b61f1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:43:11 old-k8s-version-20220531192531-2108 kubelet[5553]: W0531 19:43:11.623208    5553 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5-spl8w through plugin: invalid network status for
	May 31 19:43:12 old-k8s-version-20220531192531-2108 kubelet[5553]: W0531 19:43:12.849219    5553 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5-spl8w through plugin: invalid network status for
	May 31 19:43:21 old-k8s-version-20220531192531-2108 kubelet[5553]: W0531 19:43:21.993681    5553 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-cs5v7 through plugin: invalid network status for
	May 31 19:43:23 old-k8s-version-20220531192531-2108 kubelet[5553]: W0531 19:43:23.275984    5553 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-cs5v7 through plugin: invalid network status for
	May 31 19:43:23 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:43:23.290492    5553 pod_workers.go:191] Error syncing pod 82fa1259-4596-4392-9f56-571eafe449d1 ("dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"
	May 31 19:43:24 old-k8s-version-20220531192531-2108 kubelet[5553]: W0531 19:43:24.306814    5553 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-cs5v7 through plugin: invalid network status for
	May 31 19:43:25 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:43:25.560752    5553 pod_workers.go:191] Error syncing pod c248957e-7215-4044-9e73-acc8998b61f1 ("metrics-server-6f89b5864b-v8mjx_kube-system(c248957e-7215-4044-9e73-acc8998b61f1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	May 31 19:43:29 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:43:29.445512    5553 pod_workers.go:191] Error syncing pod 82fa1259-4596-4392-9f56-571eafe449d1 ("dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"
	May 31 19:43:39 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:43:39.662275    5553 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	May 31 19:43:39 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:43:39.662417    5553 kuberuntime_image.go:50] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	May 31 19:43:39 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:43:39.662599    5553 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	May 31 19:43:39 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:43:39.662670    5553 pod_workers.go:191] Error syncing pod c248957e-7215-4044-9e73-acc8998b61f1 ("metrics-server-6f89b5864b-v8mjx_kube-system(c248957e-7215-4044-9e73-acc8998b61f1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:43:42 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:43:42.551936    5553 pod_workers.go:191] Error syncing pod 82fa1259-4596-4392-9f56-571eafe449d1 ("dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"
	May 31 19:43:51 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:43:51.568837    5553 pod_workers.go:191] Error syncing pod c248957e-7215-4044-9e73-acc8998b61f1 ("metrics-server-6f89b5864b-v8mjx_kube-system(c248957e-7215-4044-9e73-acc8998b61f1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	May 31 19:43:54 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:43:54.552084    5553 pod_workers.go:191] Error syncing pod 82fa1259-4596-4392-9f56-571eafe449d1 ("dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"
	May 31 19:44:06 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:44:06.557836    5553 pod_workers.go:191] Error syncing pod c248957e-7215-4044-9e73-acc8998b61f1 ("metrics-server-6f89b5864b-v8mjx_kube-system(c248957e-7215-4044-9e73-acc8998b61f1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	May 31 19:44:08 old-k8s-version-20220531192531-2108 kubelet[5553]: W0531 19:44:08.934232    5553 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-cs5v7 through plugin: invalid network status for
	May 31 19:44:09 old-k8s-version-20220531192531-2108 kubelet[5553]: W0531 19:44:09.402928    5553 container.go:409] Failed to create summary reader for "/kubepods/besteffort/pod82fa1259-4596-4392-9f56-571eafe449d1/d9bab4566384602eebba16adb5527bbde86b9377f00da7d2e1af2ddc1ea2b2a8": none of the resources are being tracked.
	May 31 19:44:10 old-k8s-version-20220531192531-2108 kubelet[5553]: W0531 19:44:10.250330    5553 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-cs5v7 through plugin: invalid network status for
	May 31 19:44:10 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:44:10.268374    5553 pod_workers.go:191] Error syncing pod 82fa1259-4596-4392-9f56-571eafe449d1 ("dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"
	May 31 19:44:11 old-k8s-version-20220531192531-2108 kubelet[5553]: W0531 19:44:11.282563    5553 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-cs5v7 through plugin: invalid network status for
	May 31 19:44:19 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:44:19.443677    5553 pod_workers.go:191] Error syncing pod 82fa1259-4596-4392-9f56-571eafe449d1 ("dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"
	
	* 
	* ==> kubernetes-dashboard [8c65f8873799] <==
	* 2022/05/31 19:43:11 Starting overwatch
	2022/05/31 19:43:11 Using namespace: kubernetes-dashboard
	2022/05/31 19:43:11 Using in-cluster config to connect to apiserver
	2022/05/31 19:43:11 Using secret token for csrf signing
	2022/05/31 19:43:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/05/31 19:43:11 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/05/31 19:43:11 Successful initial request to the apiserver, version: v1.16.0
	2022/05/31 19:43:11 Generating JWE encryption key
	2022/05/31 19:43:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/05/31 19:43:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/05/31 19:43:12 Initializing JWE encryption key from synchronized object
	2022/05/31 19:43:12 Creating in-cluster Sidecar client
	2022/05/31 19:43:12 Serving insecurely on HTTP port: 9090
	2022/05/31 19:43:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/05/31 19:43:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/05/31 19:44:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [78333097362e] <==
	* I0531 19:42:26.537283       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0531 19:42:26.627656       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0531 19:42:26.628194       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0531 19:42:26.726417       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0531 19:42:26.728128       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20220531192531-2108_dedac804-a1e6-404e-a773-fabda9042592!
	I0531 19:42:26.729441       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9faff84a-d94b-40cf-9dbc-713d9688f4e4", APIVersion:"v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20220531192531-2108_dedac804-a1e6-404e-a773-fabda9042592 became leader
	I0531 19:42:26.931160       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20220531192531-2108_dedac804-a1e6-404e-a773-fabda9042592!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20220531192531-2108 -n old-k8s-version-20220531192531-2108
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20220531192531-2108 -n old-k8s-version-20220531192531-2108: (10.7140405s)
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-20220531192531-2108 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-6f89b5864b-v8mjx
helpers_test.go:272: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context old-k8s-version-20220531192531-2108 describe pod metrics-server-6f89b5864b-v8mjx
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220531192531-2108 describe pod metrics-server-6f89b5864b-v8mjx: exit status 1 (338.8752ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6f89b5864b-v8mjx" not found

** /stderr **
helpers_test.go:277: kubectl --context old-k8s-version-20220531192531-2108 describe pod metrics-server-6f89b5864b-v8mjx: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220531192531-2108
helpers_test.go:231: (dbg) Done: docker inspect old-k8s-version-20220531192531-2108: (1.379114s)
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220531192531-2108:

-- stdout --
	[
	    {
	        "Id": "73d9dd1c979f96bb5332755b3731cd7b8a50346468819b2bc362eb6e60c2bebe",
	        "Created": "2022-05-31T19:32:10.7783868Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 209389,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T19:35:54.7458106Z",
	            "FinishedAt": "2022-05-31T19:35:32.9751099Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/73d9dd1c979f96bb5332755b3731cd7b8a50346468819b2bc362eb6e60c2bebe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/73d9dd1c979f96bb5332755b3731cd7b8a50346468819b2bc362eb6e60c2bebe/hostname",
	        "HostsPath": "/var/lib/docker/containers/73d9dd1c979f96bb5332755b3731cd7b8a50346468819b2bc362eb6e60c2bebe/hosts",
	        "LogPath": "/var/lib/docker/containers/73d9dd1c979f96bb5332755b3731cd7b8a50346468819b2bc362eb6e60c2bebe/73d9dd1c979f96bb5332755b3731cd7b8a50346468819b2bc362eb6e60c2bebe-json.log",
	        "Name": "/old-k8s-version-20220531192531-2108",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220531192531-2108:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220531192531-2108",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/62b6f32bd3ed97def9fa39268e79cead11ff11a89372be02429bef84bcf65810-init/diff:/var/lib/docker/overlay2/42ebd8012a176a6c9bc83a2b81ffb1eb5c8e01d5410cb5d59346522bbaddf2cc/diff:/var/lib/docker/overlay2/59dce173ea661e9679f479af711a101ab0e97afb60abfd3c5b7a199b5c3e2b3b/diff:/var/lib/docker/overlay2/0328b60a223ca9f8bab93e6b86106d8b64d16fa559a56e88abbdee372b3b6a70/diff:/var/lib/docker/overlay2/b781f2620a052ee02138337819bde18c09122be2f20b7cfefaf7688f18d0c559/diff:/var/lib/docker/overlay2/af966c145b90b1748180b9ffcb1521d6fa9914e1d0ca582b239123591ffd1527/diff:/var/lib/docker/overlay2/5cd2b511f6f3bc93855ed77b5510ca4c67426eea433ccda53ea8e864342a413e/diff:/var/lib/docker/overlay2/f896d291d0c004470c3e38ea0d3be8e2b2a48ea36d45662c40fe3e105cbf4dec/diff:/var/lib/docker/overlay2/9e8994dcf5b1692245d5e40982d040298bfa7f7977892cf4be8ba3697f2c1283/diff:/var/lib/docker/overlay2/a7da4130c1b629e2a737b34701c6d4dfe6c48f92771856a887e06a1edc5456f8/diff:/var/lib/docker/overlay2/4c25734b9c8459489256b5f70dbb446897b9510d1cf9187e903f845ffa2a7ec2/diff:/var/lib/docker/overlay2/5c6cef49a0d0d1a36777fa7e0955ecdffb41ce354b7984f232e9cd51916416f7/diff:/var/lib/docker/overlay2/b79c799ed97edb702ed4c4ccb55ef9c645ae162e30e8f297ca5dd1152c29de41/diff:/var/lib/docker/overlay2/c84b7bc7c79ffdedf2d1265e21eec011dc3215811fb0569f7eb7d6b9aec884e8/diff:/var/lib/docker/overlay2/df8e2c3af362fd04ee17cb8d67105cf489427b2ae7cec77b79a2778e6c8c0234/diff:/var/lib/docker/overlay2/e56e356f8425868b31ada978267de73f074f211985ff1849ece7ab8341c33bae/diff:/var/lib/docker/overlay2/82c032066e83d3297742c83dd29132974e9db73a0b0b0a8edd3bcbbdb29cd53c/diff:/var/lib/docker/overlay2/15532131f3e6d0b2faf705733b06ae0c869147f2ca9592e3a80b6eaadad23544/diff:/var/lib/docker/overlay2/73fa456f504732f46cbe49368167247ca47b3099a6a75a7023ba16e7f598aee5/diff:/var/lib/docker/overlay2/e5635e020aadcc8dd1e5e3cd2eaa45cb97147f47bf406211fc61d7cbfc531193/diff:/var/lib/docker/overlay2/40b76b3249d3f7a8a737e2db80ebc1ed3b76d59724641217e8aae414ad832781/diff:/var/lib/docker/overlay2/50ea2ce78d4fe52f626b2755a14f71a3c4f9b5a4f929646d9200876bdb1652c1/diff:/var/lib/docker/overlay2/d0a6e94d1f4aa73824d39c6e655bc4bdcd6568cea821b5d0f71174591c9cbbb3/diff:/var/lib/docker/overlay2/20c8fbe37a8c89a03b7bffe8cbc507e888cd5886f86f43b551d6a09fee1ce5e7/diff:/var/lib/docker/overlay2/48942b31cfe24e44c65a8be1785cd90488444f8c420a79b72a123034b01dd3f8/diff:/var/lib/docker/overlay2/c90124ab97e02facd949bfbd45815d6d73a40303b47ba4a4bc035788f5ee2dc3/diff:/var/lib/docker/overlay2/38c82aeabee1c8f46551413ecabb24f2f22680bb623f79e40c751558747a03f5/diff:/var/lib/docker/overlay2/4fa8894d1c1d773bc2e0511f273eab03fb7b8be7489eab5cd3eb57cc0d12e855/diff:/var/lib/docker/overlay2/23319fcddb47e50928e2044bac662de8153728f3a2eefa9c6ad5a5f413efec88/diff:/var/lib/docker/overlay2/b7ecd073b5b747c21ecbd1ca61887899f7e227fac3e383e24f868549b7929d74/diff:/var/lib/docker/overlay2/29a5674b4bbabfd07c4ce0b2a8b84ce98af380bf984043a4a9a6cd0743e4630c/diff:/var/lib/docker/overlay2/86a10266979ed72dc4372ade724e64741de35702626642ba60a15cca1433682e/diff:/var/lib/docker/overlay2/03a1af7f82f1cb2b6eadbd1f13c8e9f6ca281ef3a8968d6aa45d284f286aefca/diff:/var/lib/docker/overlay2/f36cce4566278d24128326f8ef6ea446884c0c6941ccdb763ddf936e178afbff/diff:/var/lib/docker/overlay2/e54a2a61ba3597af53ec65a822821ffca97788e4b1dbfeedf98bf4d12e78973d/diff:/var/lib/docker/overlay2/dd54a25b898b0d7952f0bcb99a0450ee3d6b4269599e9355b4ae5e0c540c2caa/diff:/var/lib/docker/overlay2/ae6c1d1e9e79e03382217f21886420e3118a3f18f7c44f76c19262a84a43e219/diff:/var/lib/docker/overlay2/82faa00f86c1fa99063466464f71cdd6d510aa3e45c6c43301b2119b5bd5285a/diff:/var/lib/docker/overlay2/9f54999972b485642f042b9ed4d00316be0a1d35c060e619aca79b1583180446/diff:/var/lib/docker/overlay2/b467240c20564ba44d0946c716cf18ab5be973b43b02c37ee3ddd8f94502f41b/diff:/var/lib/docker/overlay2/21217d4ff1c5cf81dd53cfd831e0961189fb9f86812e1f53843f0022383345e6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/62b6f32bd3ed97def9fa39268e79cead11ff11a89372be02429bef84bcf65810/merged",
	                "UpperDir": "/var/lib/docker/overlay2/62b6f32bd3ed97def9fa39268e79cead11ff11a89372be02429bef84bcf65810/diff",
	                "WorkDir": "/var/lib/docker/overlay2/62b6f32bd3ed97def9fa39268e79cead11ff11a89372be02429bef84bcf65810/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220531192531-2108",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220531192531-2108/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220531192531-2108",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220531192531-2108",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220531192531-2108",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9d19ad06dc07a20673d48b61192d7c0e0905e29621856aeb09b8f9b0410f62d1",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54198"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54199"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54200"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54201"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54202"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9d19ad06dc07",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220531192531-2108": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "73d9dd1c979f",
	                        "old-k8s-version-20220531192531-2108"
	                    ],
	                    "NetworkID": "ee4a2a412a92078b653cbcaf1d57d8604149789ff8c7d75dfb2ed03e6ea10fc2",
	                    "EndpointID": "4270c98919ae2493bf9ef2d067f5322982869ac9e0583f0317a4898c720e0680",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220531192531-2108 -n old-k8s-version-20220531192531-2108

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220531192531-2108 -n old-k8s-version-20220531192531-2108: (8.3383137s)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-20220531192531-2108 logs -n 25
E0531 19:44:43.346320    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220531192611-2108\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p old-k8s-version-20220531192531-2108 logs -n 25: (8.6893321s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |       User        |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|-------------------|----------------|---------------------|---------------------|
	| start   | -p                                                         | default-k8s-different-port-20220531193451-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:34 GMT | 31 May 22 19:37 GMT |
	|         | default-k8s-different-port-20220531193451-2108             |                                                |                   |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |                   |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |                   |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |                   |                |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220531193451-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:37 GMT | 31 May 22 19:37 GMT |
	|         | default-k8s-different-port-20220531193451-2108             |                                                |                   |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |                   |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |                   |                |                     |                     |
	| ssh     | -p                                                         | no-preload-20220531192611-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:37 GMT | 31 May 22 19:37 GMT |
	|         | no-preload-20220531192611-2108                             |                                                |                   |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |                   |                |                     |                     |
	| pause   | -p                                                         | no-preload-20220531192611-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:37 GMT | 31 May 22 19:37 GMT |
	|         | no-preload-20220531192611-2108                             |                                                |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |                   |                |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220531193451-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:37 GMT | 31 May 22 19:37 GMT |
	|         | default-k8s-different-port-20220531193451-2108             |                                                |                   |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |                   |                |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220531193451-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:37 GMT | 31 May 22 19:37 GMT |
	|         | default-k8s-different-port-20220531193451-2108             |                                                |                   |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |                   |                |                     |                     |
	| unpause | -p                                                         | no-preload-20220531192611-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:37 GMT | 31 May 22 19:37 GMT |
	|         | no-preload-20220531192611-2108                             |                                                |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |                   |                |                     |                     |
	| delete  | -p                                                         | no-preload-20220531192611-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:38 GMT | 31 May 22 19:38 GMT |
	|         | no-preload-20220531192611-2108                             |                                                |                   |                |                     |                     |
	| delete  | -p                                                         | no-preload-20220531192611-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:38 GMT | 31 May 22 19:38 GMT |
	|         | no-preload-20220531192611-2108                             |                                                |                   |                |                     |                     |
	| start   | -p newest-cni-20220531193849-2108 --memory=2200            | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:38 GMT | 31 May 22 19:41 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |                   |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |                   |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |                   |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |                   |                |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.23.6               |                                                |                   |                |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:41 GMT | 31 May 22 19:41 GMT |
	|         | newest-cni-20220531193849-2108                             |                                                |                   |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |                   |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |                   |                |                     |                     |
	| stop    | -p                                                         | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:41 GMT | 31 May 22 19:41 GMT |
	|         | newest-cni-20220531193849-2108                             |                                                |                   |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |                   |                |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:41 GMT | 31 May 22 19:41 GMT |
	|         | newest-cni-20220531193849-2108                             |                                                |                   |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |                   |                |                     |                     |
	| start   | -p                                                         | embed-certs-20220531193346-2108                | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:33 GMT | 31 May 22 19:42 GMT |
	|         | embed-certs-20220531193346-2108                            |                                                |                   |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                |                   |                |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                |                   |                |                     |                     |
	|         | --driver=docker                                            |                                                |                   |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |                   |                |                     |                     |
	| addons  | enable metrics-server -p                                   | embed-certs-20220531193346-2108                | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:42 GMT | 31 May 22 19:42 GMT |
	|         | embed-certs-20220531193346-2108                            |                                                |                   |                |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |                   |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |                   |                |                     |                     |
	| stop    | -p                                                         | embed-certs-20220531193346-2108                | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:42 GMT | 31 May 22 19:42 GMT |
	|         | embed-certs-20220531193346-2108                            |                                                |                   |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |                   |                |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220531193346-2108                | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:42 GMT | 31 May 22 19:42 GMT |
	|         | embed-certs-20220531193346-2108                            |                                                |                   |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |                   |                |                     |                     |
	| start   | -p newest-cni-20220531193849-2108 --memory=2200            | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:41 GMT | 31 May 22 19:43 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |                   |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |                   |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |                   |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |                   |                |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.23.6               |                                                |                   |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:43 GMT | 31 May 22 19:43 GMT |
	|         | newest-cni-20220531193849-2108                             |                                                |                   |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |                   |                |                     |                     |
	| start   | -p                                                         | old-k8s-version-20220531192531-2108            | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:35 GMT | 31 May 22 19:43 GMT |
	|         | old-k8s-version-20220531192531-2108                        |                                                |                   |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                |                   |                |                     |                     |
	|         | --wait=true --kvm-network=default                          |                                                |                   |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                |                   |                |                     |                     |
	|         | --disable-driver-mounts                                    |                                                |                   |                |                     |                     |
	|         | --keep-context=false                                       |                                                |                   |                |                     |                     |
	|         | --driver=docker                                            |                                                |                   |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                                                |                   |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:43 GMT | 31 May 22 19:43 GMT |
	|         | newest-cni-20220531193849-2108                             |                                                |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |                   |                |                     |                     |
	| ssh     | -p                                                         | old-k8s-version-20220531192531-2108            | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:43 GMT | 31 May 22 19:43 GMT |
	|         | old-k8s-version-20220531192531-2108                        |                                                |                   |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |                   |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:43 GMT | 31 May 22 19:43 GMT |
	|         | newest-cni-20220531193849-2108                             |                                                |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |                   |                |                     |                     |
	| logs    | old-k8s-version-20220531192531-2108                        | old-k8s-version-20220531192531-2108            | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:44 GMT | 31 May 22 19:44 GMT |
	|         | logs -n 25                                                 |                                                |                   |                |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220531193451-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:37 GMT | 31 May 22 19:44 GMT |
	|         | default-k8s-different-port-20220531193451-2108             |                                                |                   |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |                   |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |                   |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |                   |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|-------------------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 19:42:50
	Running on machine: minikube7
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 19:42:50.127342    8616 out.go:296] Setting OutFile to fd 2016 ...
	I0531 19:42:50.216255    8616 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:42:50.216255    8616 out.go:309] Setting ErrFile to fd 712...
	I0531 19:42:50.216255    8616 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:42:50.238305    8616 out.go:303] Setting JSON to false
	I0531 19:42:50.243280    8616 start.go:115] hostinfo: {"hostname":"minikube7","uptime":84440,"bootTime":1653941730,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0531 19:42:50.243280    8616 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 19:42:50.247300    8616 out.go:177] * [embed-certs-20220531193346-2108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0531 19:42:50.251277    8616 notify.go:193] Checking for updates...
	I0531 19:42:50.254277    8616 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0531 19:42:50.257263    8616 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0531 19:42:50.259281    8616 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 19:42:50.262281    8616 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:42:46.136944    7556 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54458/healthz ...
	I0531 19:42:46.155664    7556 api_server.go:266] https://127.0.0.1:54458/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:42:46.155664    7556 api_server.go:102] status: https://127.0.0.1:54458/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:42:46.638060    7556 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54458/healthz ...
	I0531 19:42:46.725795    7556 api_server.go:266] https://127.0.0.1:54458/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:42:46.725795    7556 api_server.go:102] status: https://127.0.0.1:54458/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:42:47.143046    7556 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54458/healthz ...
	I0531 19:42:47.242770    7556 api_server.go:266] https://127.0.0.1:54458/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:42:47.242770    7556 api_server.go:102] status: https://127.0.0.1:54458/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:42:47.638251    7556 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54458/healthz ...
	I0531 19:42:47.830869    7556 api_server.go:266] https://127.0.0.1:54458/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:42:47.831617    7556 api_server.go:102] status: https://127.0.0.1:54458/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:42:48.140734    7556 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54458/healthz ...
	I0531 19:42:48.245672    7556 api_server.go:266] https://127.0.0.1:54458/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:42:48.245672    7556 api_server.go:102] status: https://127.0.0.1:54458/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:42:48.641307    7556 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54458/healthz ...
	I0531 19:42:48.733322    7556 api_server.go:266] https://127.0.0.1:54458/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:42:48.733322    7556 api_server.go:102] status: https://127.0.0.1:54458/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:42:49.147453    7556 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54458/healthz ...
	I0531 19:42:49.240676    7556 api_server.go:266] https://127.0.0.1:54458/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:42:49.240676    7556 api_server.go:102] status: https://127.0.0.1:54458/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:42:49.648266    7556 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54458/healthz ...
	I0531 19:42:49.743281    7556 api_server.go:266] https://127.0.0.1:54458/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:42:49.743281    7556 api_server.go:102] status: https://127.0.0.1:54458/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:42:50.146279    7556 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54458/healthz ...
	I0531 19:42:50.240267    7556 api_server.go:266] https://127.0.0.1:54458/healthz returned 200:
	ok
	I0531 19:42:50.325883    7556 api_server.go:140] control plane version: v1.23.6
	I0531 19:42:50.325883    7556 api_server.go:130] duration metric: took 12.194093s to wait for apiserver health ...
	I0531 19:42:50.325883    7556 cni.go:95] Creating CNI manager for ""
	I0531 19:42:50.325883    7556 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 19:42:50.325883    7556 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 19:42:50.433899    7556 system_pods.go:59] 8 kube-system pods found
	I0531 19:42:50.433899    7556 system_pods.go:61] "coredns-64897985d-qpzx5" [801bd65d-655d-451b-a3ff-79295aaeaf09] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0531 19:42:50.433899    7556 system_pods.go:61] "etcd-newest-cni-20220531193849-2108" [663ce3bf-8420-4d38-a012-54abf228ce78] Running
	I0531 19:42:50.433899    7556 system_pods.go:61] "kube-apiserver-newest-cni-20220531193849-2108" [e713d4e4-3356-41ac-b8af-52bf19d65052] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0531 19:42:50.433899    7556 system_pods.go:61] "kube-controller-manager-newest-cni-20220531193849-2108" [fd22af45-85a2-484a-a239-0e45831e8df8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 19:42:50.433899    7556 system_pods.go:61] "kube-proxy-mh9ct" [85823877-90a8-4ad0-b0ad-b4ed75f4845e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 19:42:50.433899    7556 system_pods.go:61] "kube-scheduler-newest-cni-20220531193849-2108" [34deeb35-60bc-42b6-a91d-cedda9a76363] Running
	I0531 19:42:50.433899    7556 system_pods.go:61] "metrics-server-b955d9d8-rt44k" [d0c2b733-0fd1-4e34-88aa-d73b77704a89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 19:42:50.433899    7556 system_pods.go:61] "storage-provisioner" [a104bd0d-cf9b-4dd3-b127-d3e84c4ae96b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0531 19:42:50.433899    7556 system_pods.go:74] duration metric: took 108.0157ms to wait for pod list to return data ...
	I0531 19:42:50.433899    7556 node_conditions.go:102] verifying NodePressure condition ...
	I0531 19:42:50.538880    7556 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0531 19:42:50.538880    7556 node_conditions.go:123] node cpu capacity is 16
	I0531 19:42:50.538880    7556 node_conditions.go:105] duration metric: took 104.9809ms to run NodePressure ...
	I0531 19:42:50.538880    7556 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:42:49.037358    1340 system_pods.go:86] 4 kube-system pods found
	I0531 19:42:49.037358    1340 system_pods.go:89] "coredns-5644d7b6d9-jxp72" [f831437f-5aa7-42cd-bb72-b1a20942f56a] Running
	I0531 19:42:49.037358    1340 system_pods.go:89] "kube-proxy-r556l" [86e67749-a177-45b6-8857-e4f6c55b8dd8] Running
	I0531 19:42:49.037358    1340 system_pods.go:89] "metrics-server-6f89b5864b-v8mjx" [c248957e-7215-4044-9e73-acc8998b61f1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 19:42:49.037358    1340 system_pods.go:89] "storage-provisioner" [8001c475-f873-4298-ad3e-ac9b8d56927b] Running
	I0531 19:42:49.037358    1340 retry.go:31] will retry after 6.402197611s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0531 19:42:52.545436    7556 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.0065467s)
	I0531 19:42:52.545436    7556 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 19:42:52.637427    7556 ops.go:34] apiserver oom_adj: -16
	I0531 19:42:52.637427    7556 kubeadm.go:630] restartCluster took 27.8311331s
	I0531 19:42:52.637427    7556 kubeadm.go:397] StartCluster complete in 27.9633793s
	I0531 19:42:52.637427    7556 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:42:52.638421    7556 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0531 19:42:52.642395    7556 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:42:52.750403    7556 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220531193849-2108" rescaled to 1
	I0531 19:42:52.750403    7556 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 19:42:52.750403    7556 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0531 19:42:52.756465    7556 out.go:177] * Verifying Kubernetes components...
	I0531 19:42:52.750403    7556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 19:42:52.750403    7556 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220531193849-2108"
	I0531 19:42:52.750403    7556 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220531193849-2108"
	I0531 19:42:52.750403    7556 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220531193849-2108"
	I0531 19:42:52.750403    7556 addons.go:65] Setting dashboard=true in profile "newest-cni-20220531193849-2108"
	I0531 19:42:52.751430    7556 config.go:178] Loaded profile config "newest-cni-20220531193849-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:42:51.472496    9560 pod_ready.go:102] pod "metrics-server-b955d9d8-9rlw5" in "kube-system" namespace has status "Ready":"False"
	I0531 19:42:53.953088    9560 pod_ready.go:81] duration metric: took 4m0.1785878s waiting for pod "metrics-server-b955d9d8-9rlw5" in "kube-system" namespace to be "Ready" ...
	E0531 19:42:53.953088    9560 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-9rlw5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0531 19:42:53.953088    9560 pod_ready.go:38] duration metric: took 4m5.5039717s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:42:53.953088    9560 kubeadm.go:630] restartCluster took 4m28.9618863s
	W0531 19:42:53.953731    9560 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0531 19:42:53.953731    9560 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0531 19:42:52.756465    7556 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220531193849-2108"
	W0531 19:42:52.759410    7556 addons.go:165] addon storage-provisioner should already be in state true
	I0531 19:42:52.756465    7556 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220531193849-2108"
	I0531 19:42:52.760420    7556 host.go:66] Checking if "newest-cni-20220531193849-2108" exists ...
	I0531 19:42:52.756465    7556 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220531193849-2108"
	W0531 19:42:52.760420    7556 addons.go:165] addon metrics-server should already be in state true
	I0531 19:42:52.756465    7556 addons.go:153] Setting addon dashboard=true in "newest-cni-20220531193849-2108"
	W0531 19:42:52.760420    7556 addons.go:165] addon dashboard should already be in state true
	I0531 19:42:52.760420    7556 host.go:66] Checking if "newest-cni-20220531193849-2108" exists ...
	I0531 19:42:52.760420    7556 host.go:66] Checking if "newest-cni-20220531193849-2108" exists ...
	I0531 19:42:52.787289    7556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:42:52.798232    7556 cli_runner.go:164] Run: docker container inspect newest-cni-20220531193849-2108 --format={{.State.Status}}
	I0531 19:42:52.802209    7556 cli_runner.go:164] Run: docker container inspect newest-cni-20220531193849-2108 --format={{.State.Status}}
	I0531 19:42:52.803210    7556 cli_runner.go:164] Run: docker container inspect newest-cni-20220531193849-2108 --format={{.State.Status}}
	I0531 19:42:52.807213    7556 cli_runner.go:164] Run: docker container inspect newest-cni-20220531193849-2108 --format={{.State.Status}}
	I0531 19:42:53.346951    7556 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0531 19:42:53.362835    7556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220531193849-2108
	I0531 19:42:54.408200    7556 cli_runner.go:217] Completed: docker container inspect newest-cni-20220531193849-2108 --format={{.State.Status}}: (1.6099613s)
	I0531 19:42:54.435301    7556 cli_runner.go:217] Completed: docker container inspect newest-cni-20220531193849-2108 --format={{.State.Status}}: (1.6330855s)
	I0531 19:42:54.439314    7556 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 19:42:54.442291    7556 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 19:42:54.442291    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 19:42:54.446291    7556 cli_runner.go:217] Completed: docker container inspect newest-cni-20220531193849-2108 --format={{.State.Status}}: (1.6430744s)
	I0531 19:42:54.451275    7556 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 19:42:54.454298    7556 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 19:42:54.456303    7556 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 19:42:54.456303    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 19:42:54.456303    7556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531193849-2108
	I0531 19:42:54.461309    7556 cli_runner.go:217] Completed: docker container inspect newest-cni-20220531193849-2108 --format={{.State.Status}}: (1.6540886s)
	I0531 19:42:54.464270    7556 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 19:42:50.265252    8616 config.go:178] Loaded profile config "embed-certs-20220531193346-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:42:50.267300    8616 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 19:42:54.110723    8616 docker.go:137] docker version: linux-20.10.14
	I0531 19:42:54.118695    8616 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:42:54.465305    7556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531193849-2108
	I0531 19:42:54.466270    7556 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 19:42:54.466270    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 19:42:54.477291    7556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531193849-2108
	I0531 19:42:54.530792    7556 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220531193849-2108"
	W0531 19:42:54.530929    7556 addons.go:165] addon default-storageclass should already be in state true
	I0531 19:42:54.530929    7556 host.go:66] Checking if "newest-cni-20220531193849-2108" exists ...
	I0531 19:42:54.565279    7556 cli_runner.go:164] Run: docker container inspect newest-cni-20220531193849-2108 --format={{.State.Status}}
	I0531 19:42:55.004826    7556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220531193849-2108: (1.6419837s)
	I0531 19:42:55.004826    7556 api_server.go:51] waiting for apiserver process to appear ...
	I0531 19:42:55.017835    7556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:42:55.060968    7556 api_server.go:71] duration metric: took 2.3105545s to wait for apiserver process to appear ...
	I0531 19:42:55.060968    7556 api_server.go:87] waiting for apiserver healthz status ...
	I0531 19:42:55.060968    7556 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54458/healthz ...
	I0531 19:42:55.159964    7556 api_server.go:266] https://127.0.0.1:54458/healthz returned 200:
	ok
	I0531 19:42:55.165977    7556 api_server.go:140] control plane version: v1.23.6
	I0531 19:42:55.165977    7556 api_server.go:130] duration metric: took 105.0091ms to wait for apiserver health ...
	I0531 19:42:55.165977    7556 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 19:42:55.244153    7556 system_pods.go:59] 8 kube-system pods found
	I0531 19:42:55.244153    7556 system_pods.go:61] "coredns-64897985d-qpzx5" [801bd65d-655d-451b-a3ff-79295aaeaf09] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0531 19:42:55.244153    7556 system_pods.go:61] "etcd-newest-cni-20220531193849-2108" [663ce3bf-8420-4d38-a012-54abf228ce78] Running
	I0531 19:42:55.244153    7556 system_pods.go:61] "kube-apiserver-newest-cni-20220531193849-2108" [e713d4e4-3356-41ac-b8af-52bf19d65052] Running
	I0531 19:42:55.244153    7556 system_pods.go:61] "kube-controller-manager-newest-cni-20220531193849-2108" [fd22af45-85a2-484a-a239-0e45831e8df8] Running
	I0531 19:42:55.244153    7556 system_pods.go:61] "kube-proxy-mh9ct" [85823877-90a8-4ad0-b0ad-b4ed75f4845e] Running
	I0531 19:42:55.244153    7556 system_pods.go:61] "kube-scheduler-newest-cni-20220531193849-2108" [34deeb35-60bc-42b6-a91d-cedda9a76363] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 19:42:55.244153    7556 system_pods.go:61] "metrics-server-b955d9d8-rt44k" [d0c2b733-0fd1-4e34-88aa-d73b77704a89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 19:42:55.244153    7556 system_pods.go:61] "storage-provisioner" [a104bd0d-cf9b-4dd3-b127-d3e84c4ae96b] Running
	I0531 19:42:55.244153    7556 system_pods.go:74] duration metric: took 78.1757ms to wait for pod list to return data ...
	I0531 19:42:55.244153    7556 default_sa.go:34] waiting for default service account to be created ...
	I0531 19:42:55.253122    7556 default_sa.go:45] found service account: "default"
	I0531 19:42:55.253122    7556 default_sa.go:55] duration metric: took 7.9853ms for default service account to be created ...
	I0531 19:42:55.254122    7556 kubeadm.go:572] duration metric: took 2.5037081s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0531 19:42:55.254122    7556 node_conditions.go:102] verifying NodePressure condition ...
	I0531 19:42:55.265141    7556 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0531 19:42:55.265141    7556 node_conditions.go:123] node cpu capacity is 16
	I0531 19:42:55.265141    7556 node_conditions.go:105] duration metric: took 11.0188ms to run NodePressure ...
	I0531 19:42:55.265141    7556 start.go:213] waiting for startup goroutines ...
	I0531 19:42:56.082328    7556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531193849-2108: (1.6160519s)
	I0531 19:42:56.082328    7556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54454 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220531193849-2108\id_rsa Username:docker}
	I0531 19:42:56.098315    7556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531193849-2108: (1.6210174s)
	I0531 19:42:56.098315    7556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54454 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220531193849-2108\id_rsa Username:docker}
	I0531 19:42:56.129090    7556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531193849-2108: (1.6726375s)
	I0531 19:42:57.078202    8616 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.9594937s)
	I0531 19:42:57.079004    8616 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:92 OomKillDisable:true NGoroutines:63 SystemTime:2022-05-31 19:42:55.6928806 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 19:42:57.083679    8616 out.go:177] * Using the docker driver based on existing profile
	I0531 19:42:55.542888    1340 system_pods.go:86] 5 kube-system pods found
	I0531 19:42:55.542888    1340 system_pods.go:89] "coredns-5644d7b6d9-jxp72" [f831437f-5aa7-42cd-bb72-b1a20942f56a] Running
	I0531 19:42:55.542888    1340 system_pods.go:89] "etcd-old-k8s-version-20220531192531-2108" [d26cf30c-3701-4ea4-8a08-6aa0010138c2] Pending
	I0531 19:42:55.542888    1340 system_pods.go:89] "kube-proxy-r556l" [86e67749-a177-45b6-8857-e4f6c55b8dd8] Running
	I0531 19:42:55.542888    1340 system_pods.go:89] "metrics-server-6f89b5864b-v8mjx" [c248957e-7215-4044-9e73-acc8998b61f1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 19:42:55.542888    1340 system_pods.go:89] "storage-provisioner" [8001c475-f873-4298-ad3e-ac9b8d56927b] Running
	I0531 19:42:55.543887    1340 retry.go:31] will retry after 6.062999549s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0531 19:42:57.086388    8616 start.go:284] selected driver: docker
	I0531 19:42:57.086388    8616 start.go:806] validating driver "docker" against &{Name:embed-certs-20220531193346-2108 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531193346-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 19:42:57.086388    8616 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:42:57.174246    8616 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:42:59.712379    8616 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.5379525s)
	I0531 19:42:59.712780    8616 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:92 OomKillDisable:true NGoroutines:63 SystemTime:2022-05-31 19:42:58.4856581 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 19:42:59.713307    8616 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 19:42:59.713408    8616 cni.go:95] Creating CNI manager for ""
	I0531 19:42:59.713408    8616 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 19:42:59.713435    8616 start_flags.go:306] config:
	{Name:embed-certs-20220531193346-2108 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531193346-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 19:42:59.719162    8616 out.go:177] * Starting control plane node embed-certs-20220531193346-2108 in cluster embed-certs-20220531193346-2108
	I0531 19:42:59.722725    8616 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 19:42:59.725806    8616 out.go:177] * Pulling base image ...
	I0531 19:42:59.729482    8616 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 19:42:59.729687    8616 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 19:42:59.729848    8616 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0531 19:42:59.729948    8616 cache.go:57] Caching tarball of preloaded images
	I0531 19:42:59.730619    8616 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 19:42:59.731046    8616 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0531 19:42:59.731505    8616 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\embed-certs-20220531193346-2108\config.json ...
	I0531 19:42:56.129266    7556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54454 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220531193849-2108\id_rsa Username:docker}
	I0531 19:42:56.152573    7556 cli_runner.go:217] Completed: docker container inspect newest-cni-20220531193849-2108 --format={{.State.Status}}: (1.5871038s)
	I0531 19:42:56.152573    7556 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 19:42:56.152573    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 19:42:56.160019    7556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531193849-2108
	I0531 19:42:56.540386    7556 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 19:42:56.540938    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 19:42:56.555337    7556 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 19:42:56.555337    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 19:42:56.568986    7556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 19:42:56.824720    7556 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 19:42:56.824720    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 19:42:56.831199    7556 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 19:42:56.831199    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 19:42:57.036004    7556 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 19:42:57.036038    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 19:42:57.052974    7556 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 19:42:57.053103    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 19:42:57.174246    7556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 19:42:57.230987    7556 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 19:42:57.231080    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 19:42:57.340153    7556 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 19:42:57.340153    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 19:42:57.529392    7556 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 19:42:57.529392    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 19:42:57.597366    7556 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220531193849-2108: (1.4373411s)
	I0531 19:42:57.597597    7556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54454 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\newest-cni-20220531193849-2108\id_rsa Username:docker}
	I0531 19:42:57.642507    7556 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 19:42:57.642507    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 19:42:57.745306    7556 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 19:42:57.745306    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 19:42:57.841187    7556 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 19:42:57.841187    7556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 19:42:57.950588    7556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 19:42:58.061914    7556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 19:43:01.060754    8616 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 19:43:01.060754    8616 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 19:43:01.060932    8616 cache.go:206] Successfully downloaded all kic artifacts
	I0531 19:43:01.061977    8616 start.go:352] acquiring machines lock for embed-certs-20220531193346-2108: {Name:mk63e14faf496672dd0ff3bf59fb3b27b51120db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:43:01.061977    8616 start.go:356] acquired machines lock for "embed-certs-20220531193346-2108" in 0s
	I0531 19:43:01.061977    8616 start.go:94] Skipping create...Using existing machine configuration
	I0531 19:43:01.061977    8616 fix.go:55] fixHost starting: 
	I0531 19:43:01.077901    8616 cli_runner.go:164] Run: docker container inspect embed-certs-20220531193346-2108 --format={{.State.Status}}
	I0531 19:43:02.412373    8616 cli_runner.go:217] Completed: docker container inspect embed-certs-20220531193346-2108 --format={{.State.Status}}: (1.3344665s)
	I0531 19:43:02.412373    8616 fix.go:103] recreateIfNeeded on embed-certs-20220531193346-2108: state=Stopped err=<nil>
	W0531 19:43:02.412373    8616 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 19:43:02.432375    8616 out.go:177] * Restarting existing docker container for "embed-certs-20220531193346-2108" ...
	I0531 19:43:01.632478    1340 system_pods.go:86] 6 kube-system pods found
	I0531 19:43:01.632478    1340 system_pods.go:89] "coredns-5644d7b6d9-jxp72" [f831437f-5aa7-42cd-bb72-b1a20942f56a] Running
	I0531 19:43:01.632478    1340 system_pods.go:89] "etcd-old-k8s-version-20220531192531-2108" [d26cf30c-3701-4ea4-8a08-6aa0010138c2] Pending
	I0531 19:43:01.632478    1340 system_pods.go:89] "kube-controller-manager-old-k8s-version-20220531192531-2108" [f99468bb-525f-4752-9dab-f40607db0d10] Pending
	I0531 19:43:01.632478    1340 system_pods.go:89] "kube-proxy-r556l" [86e67749-a177-45b6-8857-e4f6c55b8dd8] Running
	I0531 19:43:01.632478    1340 system_pods.go:89] "metrics-server-6f89b5864b-v8mjx" [c248957e-7215-4044-9e73-acc8998b61f1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 19:43:01.632478    1340 system_pods.go:89] "storage-provisioner" [8001c475-f873-4298-ad3e-ac9b8d56927b] Running
	I0531 19:43:01.632478    1340 retry.go:31] will retry after 10.504197539s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0531 19:43:01.236824    7556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.6678186s)
	I0531 19:43:01.635485    7556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.4612198s)
	I0531 19:43:01.635485    7556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.6848808s)
	I0531 19:43:01.635485    7556 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220531193849-2108"
	I0531 19:43:02.727980    7556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.6660457s)
	I0531 19:43:02.732954    7556 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0531 19:43:02.735959    7556 addons.go:417] enableAddons completed in 9.9855123s
	I0531 19:43:02.974430    7556 start.go:504] kubectl: 1.18.2, cluster: 1.23.6 (minor skew: 5)
	I0531 19:43:02.977020    7556 out.go:177] 
	W0531 19:43:02.978721    7556 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.23.6.
	I0531 19:43:02.982712    7556 out.go:177]   - Want kubectl v1.23.6? Try 'minikube kubectl -- get pods -A'
	I0531 19:43:02.986727    7556 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220531193849-2108" cluster and "default" namespace by default
	I0531 19:43:02.454413    8616 cli_runner.go:164] Run: docker start embed-certs-20220531193346-2108
	I0531 19:43:04.888118    8616 cli_runner.go:217] Completed: docker start embed-certs-20220531193346-2108: (2.4336945s)
	I0531 19:43:04.899149    8616 cli_runner.go:164] Run: docker container inspect embed-certs-20220531193346-2108 --format={{.State.Status}}
	I0531 19:43:06.460445    8616 cli_runner.go:217] Completed: docker container inspect embed-certs-20220531193346-2108 --format={{.State.Status}}: (1.5602977s)
	I0531 19:43:06.460445    8616 kic.go:416] container "embed-certs-20220531193346-2108" state is running.
	I0531 19:43:06.474428    8616 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531193346-2108
	I0531 19:43:07.957441    8616 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531193346-2108: (1.4830066s)
	I0531 19:43:07.957441    8616 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\embed-certs-20220531193346-2108\config.json ...
	I0531 19:43:07.960439    8616 machine.go:88] provisioning docker machine ...
	I0531 19:43:07.960439    8616 ubuntu.go:169] provisioning hostname "embed-certs-20220531193346-2108"
	I0531 19:43:07.970438    8616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:43:09.409622    8616 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.4380322s)
	I0531 19:43:09.412613    8616 main.go:134] libmachine: Using SSH client type: native
	I0531 19:43:09.413634    8616 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54560 <nil> <nil>}
	I0531 19:43:09.413634    8616 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220531193346-2108 && echo "embed-certs-20220531193346-2108" | sudo tee /etc/hostname
	I0531 19:43:09.664937    8616 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220531193346-2108
	
	I0531 19:43:09.674907    8616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:43:12.159805    1340 system_pods.go:86] 7 kube-system pods found
	I0531 19:43:12.159805    1340 system_pods.go:89] "coredns-5644d7b6d9-jxp72" [f831437f-5aa7-42cd-bb72-b1a20942f56a] Running
	I0531 19:43:12.159805    1340 system_pods.go:89] "etcd-old-k8s-version-20220531192531-2108" [d26cf30c-3701-4ea4-8a08-6aa0010138c2] Running
	I0531 19:43:12.159805    1340 system_pods.go:89] "kube-controller-manager-old-k8s-version-20220531192531-2108" [f99468bb-525f-4752-9dab-f40607db0d10] Running
	I0531 19:43:12.159805    1340 system_pods.go:89] "kube-proxy-r556l" [86e67749-a177-45b6-8857-e4f6c55b8dd8] Running
	I0531 19:43:12.159805    1340 system_pods.go:89] "kube-scheduler-old-k8s-version-20220531192531-2108" [7bbd9c04-c349-4dae-8a28-e0127457afce] Pending
	I0531 19:43:12.159805    1340 system_pods.go:89] "metrics-server-6f89b5864b-v8mjx" [c248957e-7215-4044-9e73-acc8998b61f1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 19:43:12.159805    1340 system_pods.go:89] "storage-provisioner" [8001c475-f873-4298-ad3e-ac9b8d56927b] Running
	I0531 19:43:12.159805    1340 retry.go:31] will retry after 12.194240946s: missing components: kube-apiserver, kube-scheduler
	I0531 19:43:11.086438    8616 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.4115245s)
	I0531 19:43:11.092434    8616 main.go:134] libmachine: Using SSH client type: native
	I0531 19:43:11.093440    8616 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54560 <nil> <nil>}
	I0531 19:43:11.093440    8616 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220531193346-2108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220531193346-2108/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220531193346-2108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 19:43:11.255429    8616 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 19:43:11.255429    8616 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0531 19:43:11.255429    8616 ubuntu.go:177] setting up certificates
	I0531 19:43:11.255429    8616 provision.go:83] configureAuth start
	I0531 19:43:11.265440    8616 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531193346-2108
	I0531 19:43:12.708520    8616 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531193346-2108: (1.4430742s)
	I0531 19:43:12.708520    8616 provision.go:138] copyHostCerts
	I0531 19:43:12.708520    8616 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0531 19:43:12.708520    8616 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0531 19:43:12.709415    8616 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0531 19:43:12.710359    8616 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0531 19:43:12.710359    8616 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0531 19:43:12.711169    8616 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0531 19:43:12.712053    8616 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0531 19:43:12.712053    8616 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0531 19:43:12.712716    8616 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0531 19:43:12.713400    8616 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.embed-certs-20220531193346-2108 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220531193346-2108]
	I0531 19:43:13.011580    8616 provision.go:172] copyRemoteCerts
	I0531 19:43:13.023550    8616 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 19:43:13.038556    8616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:43:14.338732    8616 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.3001706s)
	I0531 19:43:14.338732    8616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54560 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\embed-certs-20220531193346-2108\id_rsa Username:docker}
	I0531 19:43:14.551697    8616 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.5281407s)
	I0531 19:43:14.552675    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 19:43:14.613469    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1265 bytes)
	I0531 19:43:14.684854    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0531 19:43:14.747113    8616 provision.go:86] duration metric: configureAuth took 3.4916684s
	I0531 19:43:14.747113    8616 ubuntu.go:193] setting minikube options for container-runtime
	I0531 19:43:14.748088    8616 config.go:178] Loaded profile config "embed-certs-20220531193346-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:43:14.761866    8616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:43:16.087899    8616 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.3258419s)
	I0531 19:43:16.091824    8616 main.go:134] libmachine: Using SSH client type: native
	I0531 19:43:16.092214    8616 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54560 <nil> <nil>}
	I0531 19:43:16.092298    8616 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0531 19:43:16.307822    8616 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0531 19:43:16.307822    8616 ubuntu.go:71] root file system type: overlay
	I0531 19:43:16.308825    8616 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0531 19:43:16.317850    8616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:43:17.695543    8616 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.3776872s)
	I0531 19:43:17.701536    8616 main.go:134] libmachine: Using SSH client type: native
	I0531 19:43:17.701536    8616 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54560 <nil> <nil>}
	I0531 19:43:17.701536    8616 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0531 19:43:17.874809    8616 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0531 19:43:17.883833    8616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:43:19.149206    8616 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.2653668s)
	I0531 19:43:19.152226    8616 main.go:134] libmachine: Using SSH client type: native
	I0531 19:43:19.153214    8616 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54560 <nil> <nil>}
	I0531 19:43:19.153214    8616 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0531 19:43:19.357762    8616 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 19:43:19.357762    8616 machine.go:91] provisioned docker machine in 11.397274s
	I0531 19:43:19.357762    8616 start.go:306] post-start starting for "embed-certs-20220531193346-2108" (driver="docker")
	I0531 19:43:19.357762    8616 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 19:43:19.368634    8616 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 19:43:19.375625    8616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:43:24.393133    1340 system_pods.go:86] 8 kube-system pods found
	I0531 19:43:24.393133    1340 system_pods.go:89] "coredns-5644d7b6d9-jxp72" [f831437f-5aa7-42cd-bb72-b1a20942f56a] Running
	I0531 19:43:24.393133    1340 system_pods.go:89] "etcd-old-k8s-version-20220531192531-2108" [d26cf30c-3701-4ea4-8a08-6aa0010138c2] Running
	I0531 19:43:24.393133    1340 system_pods.go:89] "kube-apiserver-old-k8s-version-20220531192531-2108" [d699c547-2f4a-4e3d-8249-2018ca3f8fb7] Running
	I0531 19:43:24.393133    1340 system_pods.go:89] "kube-controller-manager-old-k8s-version-20220531192531-2108" [f99468bb-525f-4752-9dab-f40607db0d10] Running
	I0531 19:43:24.393133    1340 system_pods.go:89] "kube-proxy-r556l" [86e67749-a177-45b6-8857-e4f6c55b8dd8] Running
	I0531 19:43:24.393133    1340 system_pods.go:89] "kube-scheduler-old-k8s-version-20220531192531-2108" [7bbd9c04-c349-4dae-8a28-e0127457afce] Running
	I0531 19:43:24.393133    1340 system_pods.go:89] "metrics-server-6f89b5864b-v8mjx" [c248957e-7215-4044-9e73-acc8998b61f1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 19:43:24.393133    1340 system_pods.go:89] "storage-provisioner" [8001c475-f873-4298-ad3e-ac9b8d56927b] Running
	I0531 19:43:24.393133    1340 system_pods.go:126] duration metric: took 56.8618258s to wait for k8s-apps to be running ...
	I0531 19:43:24.393133    1340 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 19:43:24.403125    1340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:43:24.445798    1340 system_svc.go:56] duration metric: took 52.6647ms WaitForService to wait for kubelet.
	I0531 19:43:24.445798    1340 kubeadm.go:572] duration metric: took 1m7.5077501s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0531 19:43:24.445798    1340 node_conditions.go:102] verifying NodePressure condition ...
	I0531 19:43:24.453799    1340 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0531 19:43:24.453799    1340 node_conditions.go:123] node cpu capacity is 16
	I0531 19:43:24.453799    1340 node_conditions.go:105] duration metric: took 8.0013ms to run NodePressure ...
	I0531 19:43:24.453799    1340 start.go:213] waiting for startup goroutines ...
	I0531 19:43:24.669491    1340 start.go:504] kubectl: 1.18.2, cluster: 1.16.0 (minor skew: 2)
	I0531 19:43:24.674088    1340 out.go:177] 
	W0531 19:43:24.677260    1340 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.16.0.
	I0531 19:43:24.680532    1340 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0531 19:43:24.685300    1340 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-20220531192531-2108" cluster and "default" namespace by default
	I0531 19:43:20.624671    8616 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.2490409s)
	I0531 19:43:20.624671    8616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54560 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\embed-certs-20220531193346-2108\id_rsa Username:docker}
	I0531 19:43:20.759134    8616 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.3904934s)
	I0531 19:43:20.769129    8616 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 19:43:20.781134    8616 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 19:43:20.781134    8616 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 19:43:20.781134    8616 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 19:43:20.781134    8616 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 19:43:20.781134    8616 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0531 19:43:20.782134    8616 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0531 19:43:20.783138    8616 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\21082.pem -> 21082.pem in /etc/ssl/certs
	I0531 19:43:20.797122    8616 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 19:43:20.827072    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\21082.pem --> /etc/ssl/certs/21082.pem (1708 bytes)
	I0531 19:43:20.887348    8616 start.go:309] post-start completed in 1.5295791s
	I0531 19:43:20.898339    8616 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:43:20.904357    8616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:43:22.218797    8616 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.3134488s)
	I0531 19:43:22.218797    8616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54560 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\embed-certs-20220531193346-2108\id_rsa Username:docker}
	I0531 19:43:22.370144    8616 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.4717985s)
	I0531 19:43:22.382114    8616 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 19:43:22.399454    8616 fix.go:57] fixHost completed within 21.3373847s
	I0531 19:43:22.399454    8616 start.go:81] releasing machines lock for "embed-certs-20220531193346-2108", held for 21.3373847s
	I0531 19:43:22.406442    8616 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531193346-2108
	I0531 19:43:23.736218    8616 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220531193346-2108: (1.3297705s)
	I0531 19:43:23.739211    8616 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 19:43:23.747232    8616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:43:23.748219    8616 ssh_runner.go:195] Run: systemctl --version
	I0531 19:43:23.757214    8616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:43:25.191855    8616 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.4446175s)
	I0531 19:43:25.192881    8616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54560 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\embed-certs-20220531193346-2108\id_rsa Username:docker}
	I0531 19:43:25.216528    8616 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.4593082s)
	I0531 19:43:25.216528    8616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54560 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\embed-certs-20220531193346-2108\id_rsa Username:docker}
	I0531 19:43:25.434737    8616 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.6955191s)
	I0531 19:43:25.434737    8616 ssh_runner.go:235] Completed: systemctl --version: (1.6865115s)
	I0531 19:43:25.450724    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 19:43:25.564263    8616 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 19:43:25.598655    8616 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0531 19:43:25.608101    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 19:43:25.652695    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 19:43:25.706768    8616 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0531 19:43:25.913648    8616 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0531 19:43:26.130777    8616 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 19:43:26.181761    8616 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:43:26.375718    8616 ssh_runner.go:195] Run: sudo systemctl start docker
	I0531 19:43:26.412452    8616 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 19:43:26.527203    8616 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 19:43:26.636205    8616 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0531 19:43:26.651207    8616 cli_runner.go:164] Run: docker exec -t embed-certs-20220531193346-2108 dig +short host.docker.internal
	I0531 19:43:28.191872    8616 cli_runner.go:217] Completed: docker exec -t embed-certs-20220531193346-2108 dig +short host.docker.internal: (1.5406583s)
	I0531 19:43:28.191872    8616 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0531 19:43:28.202880    8616 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0531 19:43:28.218418    8616 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 19:43:28.262706    8616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:43:29.498586    8616 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.2358751s)
	I0531 19:43:29.498586    8616 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 19:43:29.505594    8616 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 19:43:29.585592    8616 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0531 19:43:29.585592    8616 docker.go:541] Images already preloaded, skipping extraction
	I0531 19:43:29.592584    8616 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 19:43:29.692450    8616 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0531 19:43:29.692450    8616 cache_images.go:84] Images are preloaded, skipping loading
	I0531 19:43:29.698436    8616 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0531 19:43:29.922432    8616 cni.go:95] Creating CNI manager for ""
	I0531 19:43:29.922549    8616 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 19:43:29.922549    8616 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 19:43:29.922549    8616 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220531193346-2108 NodeName:embed-certs-20220531193346-2108 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 19:43:29.922775    8616 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "embed-certs-20220531193346-2108"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 19:43:29.922863    8616 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=embed-certs-20220531193346-2108 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531193346-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 19:43:29.937834    8616 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 19:43:29.970126    8616 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 19:43:29.980712    8616 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 19:43:30.003043    8616 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (357 bytes)
	I0531 19:43:30.040432    8616 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 19:43:30.080432    8616 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2053 bytes)
	I0531 19:43:30.128472    8616 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0531 19:43:30.144212    8616 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 19:43:30.174247    8616 certs.go:54] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\embed-certs-20220531193346-2108 for IP: 192.168.67.2
	I0531 19:43:30.174247    8616 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0531 19:43:30.175233    8616 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0531 19:43:30.175233    8616 certs.go:298] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\embed-certs-20220531193346-2108\client.key
	I0531 19:43:30.176258    8616 certs.go:298] skipping minikube signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\embed-certs-20220531193346-2108\apiserver.key.c7fa3a9e
	I0531 19:43:30.176258    8616 certs.go:298] skipping aggregator signed cert generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\embed-certs-20220531193346-2108\proxy-client.key
	I0531 19:43:30.177245    8616 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\2108.pem (1338 bytes)
	W0531 19:43:30.177245    8616 certs.go:384] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\2108_empty.pem, impossibly tiny 0 bytes
	I0531 19:43:30.178242    8616 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0531 19:43:30.178242    8616 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0531 19:43:30.178242    8616 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0531 19:43:30.178242    8616 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0531 19:43:30.179223    8616 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\21082.pem (1708 bytes)
	I0531 19:43:30.181232    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\embed-certs-20220531193346-2108\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 19:43:30.231446    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\embed-certs-20220531193346-2108\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 19:43:30.286026    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\embed-certs-20220531193346-2108\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 19:43:30.347980    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\embed-certs-20220531193346-2108\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 19:43:30.409568    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 19:43:30.468376    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 19:43:30.523994    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 19:43:30.580310    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 19:43:30.649449    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 19:43:30.719603    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\2108.pem --> /usr/share/ca-certificates/2108.pem (1338 bytes)
	I0531 19:43:30.786513    8616 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\21082.pem --> /usr/share/ca-certificates/21082.pem (1708 bytes)
	I0531 19:43:30.859521    8616 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 19:43:30.907080    8616 ssh_runner.go:195] Run: openssl version
	I0531 19:43:30.929093    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 19:43:30.986864    8616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:43:31.005336    8616 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:19 /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:43:31.014329    8616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:43:31.040343    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 19:43:31.077357    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2108.pem && ln -fs /usr/share/ca-certificates/2108.pem /etc/ssl/certs/2108.pem"
	I0531 19:43:31.111349    8616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2108.pem
	I0531 19:43:31.121345    8616 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:31 /usr/share/ca-certificates/2108.pem
	I0531 19:43:31.132335    8616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2108.pem
	I0531 19:43:31.156335    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2108.pem /etc/ssl/certs/51391683.0"
	I0531 19:43:31.198345    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21082.pem && ln -fs /usr/share/ca-certificates/21082.pem /etc/ssl/certs/21082.pem"
	I0531 19:43:31.230395    8616 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21082.pem
	I0531 19:43:31.245833    8616 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:31 /usr/share/ca-certificates/21082.pem
	I0531 19:43:31.256203    8616 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21082.pem
	I0531 19:43:31.279205    8616 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21082.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 19:43:31.309204    8616 kubeadm.go:395] StartCluster: {Name:embed-certs-20220531193346-2108 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:embed-certs-20220531193346-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 19:43:31.319672    8616 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 19:43:31.403071    8616 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 19:43:31.428156    8616 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0531 19:43:31.429149    8616 kubeadm.go:626] restartCluster start
	I0531 19:43:31.439155    8616 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 19:43:31.459646    8616 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:31.471196    8616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:43:32.683774    8616 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.2125727s)
	I0531 19:43:32.685818    8616 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220531193346-2108" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0531 19:43:32.687090    8616 kubeconfig.go:127] "embed-certs-20220531193346-2108" context is missing from C:\Users\jenkins.minikube7\minikube-integration\kubeconfig - will repair!
	I0531 19:43:32.689552    8616 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:43:32.711443    8616 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 19:43:32.741481    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:32.755718    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:32.782840    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:32.998467    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:33.009814    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:33.038110    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:33.187690    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:33.198728    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:33.231759    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:33.389082    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:33.401128    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:33.437991    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:33.596003    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:33.610434    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:33.649694    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:33.783691    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:33.794460    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:33.820957    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:33.989461    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:34.000602    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:34.029721    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:34.197752    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:34.208129    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:34.240212    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:34.388876    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:34.398381    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:34.427592    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:34.592690    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:34.603583    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:34.630736    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:34.793731    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:34.802574    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:34.840450    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:34.982936    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:34.994510    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:35.019918    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:35.194019    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:35.206505    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:35.239207    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:35.383180    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:35.395711    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:35.424130    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:35.588664    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:35.599428    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:35.628476    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:35.794694    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:35.804703    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:35.835435    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:35.835435    8616 api_server.go:165] Checking apiserver status ...
	I0531 19:43:35.845430    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:43:35.872922    8616 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:35.872966    8616 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0531 19:43:35.872966    8616 kubeadm.go:1092] stopping kube-system containers ...
	I0531 19:43:35.888288    8616 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 19:43:36.000416    8616 docker.go:442] Stopping containers: [18f322b433f7 51e6610a10f8 a57711de2363 b6d1008f5a7a 05f4ee855b48 5f2d7ca41131 a4c5949a368e 76eefca54c8e 938b770e6a22 7d10308b7ec6 284f4ae8d238 af695fbb3afb 09ad7210375d 74454f5b27c5 95eae5077bdb 459d9f4673a9]
	I0531 19:43:36.009398    8616 ssh_runner.go:195] Run: docker stop 18f322b433f7 51e6610a10f8 a57711de2363 b6d1008f5a7a 05f4ee855b48 5f2d7ca41131 a4c5949a368e 76eefca54c8e 938b770e6a22 7d10308b7ec6 284f4ae8d238 af695fbb3afb 09ad7210375d 74454f5b27c5 95eae5077bdb 459d9f4673a9
	I0531 19:43:36.107408    8616 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 19:43:36.149402    8616 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 19:43:36.172404    8616 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 31 19:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 May 31 19:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 May 31 19:41 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 May 31 19:41 /etc/kubernetes/scheduler.conf
	
	I0531 19:43:36.182417    8616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 19:43:36.212422    8616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 19:43:36.244428    8616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 19:43:36.266416    8616 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:36.278403    8616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 19:43:36.311760    8616 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 19:43:36.334758    8616 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:43:36.343790    8616 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 19:43:36.383787    8616 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 19:43:36.405748    8616 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 19:43:36.405748    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:43:36.547878    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:43:37.853479    8616 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.3055954s)
	I0531 19:43:37.854020    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:43:38.190227    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:43:38.364256    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:43:38.640714    8616 api_server.go:51] waiting for apiserver process to appear ...
	I0531 19:43:38.654240    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:43:39.203285    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:43:39.698006    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:43:40.197265    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:43:40.699188    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:43:41.204693    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:43:41.696036    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:43:42.200147    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:43:42.701351    8616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:43:42.745446    8616 api_server.go:71] duration metric: took 4.1047578s to wait for apiserver process to appear ...
	I0531 19:43:42.745446    8616 api_server.go:87] waiting for apiserver healthz status ...
	I0531 19:43:42.745446    8616 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54559/healthz ...
	I0531 19:43:42.750441    8616 api_server.go:256] stopped: https://127.0.0.1:54559/healthz: Get "https://127.0.0.1:54559/healthz": EOF
	I0531 19:43:43.260468    8616 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54559/healthz ...
	I0531 19:43:44.816816    9560 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (50.8628666s)
	I0531 19:43:44.834841    9560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:43:44.875054    9560 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 19:43:44.910403    9560 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 19:43:44.926341    9560 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 19:43:44.952402    9560 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 19:43:44.952402    9560 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 19:43:48.268195    8616 api_server.go:256] stopped: https://127.0.0.1:54559/healthz: Get "https://127.0.0.1:54559/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0531 19:43:48.760057    8616 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54559/healthz ...
	I0531 19:43:49.742236    8616 api_server.go:266] https://127.0.0.1:54559/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 19:43:49.742236    8616 api_server.go:102] status: https://127.0.0.1:54559/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 19:43:49.762242    8616 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54559/healthz ...
	I0531 19:43:49.834262    8616 api_server.go:266] https://127.0.0.1:54559/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 19:43:49.834262    8616 api_server.go:102] status: https://127.0.0.1:54559/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0531 19:43:50.265125    8616 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54559/healthz ...
	I0531 19:43:50.325042    8616 api_server.go:266] https://127.0.0.1:54559/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:43:50.325997    8616 api_server.go:102] status: https://127.0.0.1:54559/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:43:50.754281    8616 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54559/healthz ...
	I0531 19:43:50.778515    8616 api_server.go:266] https://127.0.0.1:54559/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:43:50.778515    8616 api_server.go:102] status: https://127.0.0.1:54559/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:43:51.256439    8616 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54559/healthz ...
	I0531 19:43:51.326599    8616 api_server.go:266] https://127.0.0.1:54559/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:43:51.326599    8616 api_server.go:102] status: https://127.0.0.1:54559/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:43:51.759811    8616 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54559/healthz ...
	I0531 19:43:51.871137    8616 api_server.go:266] https://127.0.0.1:54559/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:43:51.871223    8616 api_server.go:102] status: https://127.0.0.1:54559/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:43:52.250743    8616 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54559/healthz ...
	I0531 19:43:52.368759    8616 api_server.go:266] https://127.0.0.1:54559/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:43:52.368759    8616 api_server.go:102] status: https://127.0.0.1:54559/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:43:52.756360    8616 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54559/healthz ...
	I0531 19:43:52.847081    8616 api_server.go:266] https://127.0.0.1:54559/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0531 19:43:52.847081    8616 api_server.go:102] status: https://127.0.0.1:54559/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0531 19:43:53.262170    8616 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54559/healthz ...
	I0531 19:43:53.433182    8616 api_server.go:266] https://127.0.0.1:54559/healthz returned 200:
	ok
	I0531 19:43:53.526458    8616 api_server.go:140] control plane version: v1.23.6
	I0531 19:43:53.526458    8616 api_server.go:130] duration metric: took 10.7809658s to wait for apiserver health ...
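The verbose `/healthz` bodies above mark each check `[+]` (passed) or `[-]` (failed: reason withheld). A minimal sketch of extracting the failing check names from such a body — `failedChecks` is a hypothetical helper for illustration, not minikube's own code:

```go
package main

import (
	"fmt"
	"strings"
)

// failedChecks returns the names of checks marked "[-]" in a verbose
// kube-apiserver /healthz body, e.g.
// "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld".
func failedChecks(body string) []string {
	var failed []string
	for _, line := range strings.Split(body, "\n") {
		line = strings.TrimSpace(line)
		if !strings.HasPrefix(line, "[-]") {
			continue
		}
		name := strings.TrimPrefix(line, "[-]")
		if i := strings.Index(name, " "); i >= 0 {
			name = name[:i] // drop the " failed: reason withheld" tail
		}
		failed = append(failed, name)
	}
	return failed
}

func main() {
	body := "[+]ping ok\n" +
		"[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n" +
		"healthz check failed"
	fmt.Println(failedChecks(body))
}
```

Run against the 500 responses above, this would report `poststarthook/rbac/bootstrap-roles` (and, early on, the scheduling and priority-and-fairness hooks) as the checks holding back readiness.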
	I0531 19:43:53.526458    8616 cni.go:95] Creating CNI manager for ""
	I0531 19:43:53.526458    8616 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 19:43:53.526780    8616 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 19:43:53.834702    8616 system_pods.go:59] 8 kube-system pods found
	I0531 19:43:53.834702    8616 system_pods.go:61] "coredns-64897985d-h6l4d" [45e6521b-b5ba-4365-bf64-ed7f35254f8d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0531 19:43:53.834702    8616 system_pods.go:61] "etcd-embed-certs-20220531193346-2108" [8748c975-baa9-452c-8caa-89e8ff59a91a] Running
	I0531 19:43:53.834702    8616 system_pods.go:61] "kube-apiserver-embed-certs-20220531193346-2108" [df786508-0ac8-406a-b97b-c6650f016ceb] Running
	I0531 19:43:53.834702    8616 system_pods.go:61] "kube-controller-manager-embed-certs-20220531193346-2108" [fc4dc9a6-b756-47c3-93c8-b37e4a6af2ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 19:43:53.834702    8616 system_pods.go:61] "kube-proxy-qmdlz" [e19359f3-a2a5-4148-a7a1-ba356b861a3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 19:43:53.834702    8616 system_pods.go:61] "kube-scheduler-embed-certs-20220531193346-2108" [f3efddc7-a2d0-44d8-a33b-a9b19295fe12] Running
	I0531 19:43:53.834702    8616 system_pods.go:61] "metrics-server-b955d9d8-n88dp" [eb244db6-b3bb-4838-a675-474d5d7a17d7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 19:43:53.834702    8616 system_pods.go:61] "storage-provisioner" [1f06e411-8a16-4f79-b44d-1c60e7d37395] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0531 19:43:53.834702    8616 system_pods.go:74] duration metric: took 307.92ms to wait for pod list to return data ...
	I0531 19:43:53.834702    8616 node_conditions.go:102] verifying NodePressure condition ...
	I0531 19:43:53.929683    8616 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0531 19:43:53.929683    8616 node_conditions.go:123] node cpu capacity is 16
	I0531 19:43:53.929683    8616 node_conditions.go:105] duration metric: took 94.9807ms to run NodePressure ...
	I0531 19:43:53.929683    8616 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:43:56.629261    8616 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.6995662s)
	I0531 19:43:56.629261    8616 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0531 19:43:56.740265    8616 kubeadm.go:777] kubelet initialised
	I0531 19:43:56.740265    8616 kubeadm.go:778] duration metric: took 111.0043ms waiting for restarted kubelet to initialise ...
	I0531 19:43:56.740265    8616 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:43:56.844460    8616 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-h6l4d" in "kube-system" namespace to be "Ready" ...
	I0531 19:43:58.488158    8616 pod_ready.go:92] pod "coredns-64897985d-h6l4d" in "kube-system" namespace has status "Ready":"True"
	I0531 19:43:58.488158    8616 pod_ready.go:81] duration metric: took 1.6436914s waiting for pod "coredns-64897985d-h6l4d" in "kube-system" namespace to be "Ready" ...
	I0531 19:43:58.488158    8616 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220531193346-2108" in "kube-system" namespace to be "Ready" ...
	I0531 19:43:58.510159    8616 pod_ready.go:92] pod "etcd-embed-certs-20220531193346-2108" in "kube-system" namespace has status "Ready":"True"
	I0531 19:43:58.510159    8616 pod_ready.go:81] duration metric: took 22.0001ms waiting for pod "etcd-embed-certs-20220531193346-2108" in "kube-system" namespace to be "Ready" ...
	I0531 19:43:58.510159    8616 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220531193346-2108" in "kube-system" namespace to be "Ready" ...
	I0531 19:43:58.527166    8616 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220531193346-2108" in "kube-system" namespace has status "Ready":"True"
	I0531 19:43:58.527166    8616 pod_ready.go:81] duration metric: took 17.0071ms waiting for pod "kube-apiserver-embed-certs-20220531193346-2108" in "kube-system" namespace to be "Ready" ...
	I0531 19:43:58.527166    8616 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220531193346-2108" in "kube-system" namespace to be "Ready" ...
	I0531 19:43:58.550165    8616 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220531193346-2108" in "kube-system" namespace has status "Ready":"True"
	I0531 19:43:58.550165    8616 pod_ready.go:81] duration metric: took 22.9989ms waiting for pod "kube-controller-manager-embed-certs-20220531193346-2108" in "kube-system" namespace to be "Ready" ...
	I0531 19:43:58.550165    8616 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qmdlz" in "kube-system" namespace to be "Ready" ...
	I0531 19:43:58.565167    8616 pod_ready.go:92] pod "kube-proxy-qmdlz" in "kube-system" namespace has status "Ready":"True"
	I0531 19:43:58.566182    8616 pod_ready.go:81] duration metric: took 16.0169ms waiting for pod "kube-proxy-qmdlz" in "kube-system" namespace to be "Ready" ...
	I0531 19:43:58.566182    8616 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220531193346-2108" in "kube-system" namespace to be "Ready" ...
	I0531 19:43:58.885714    8616 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220531193346-2108" in "kube-system" namespace has status "Ready":"True"
	I0531 19:43:58.885714    8616 pod_ready.go:81] duration metric: took 319.5308ms waiting for pod "kube-scheduler-embed-certs-20220531193346-2108" in "kube-system" namespace to be "Ready" ...
	I0531 19:43:58.885714    8616 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-n88dp" in "kube-system" namespace to be "Ready" ...
	I0531 19:44:01.299568    8616 pod_ready.go:102] pod "metrics-server-b955d9d8-n88dp" in "kube-system" namespace has status "Ready":"False"
	I0531 19:44:03.307027    8616 pod_ready.go:102] pod "metrics-server-b955d9d8-n88dp" in "kube-system" namespace has status "Ready":"False"
	I0531 19:44:07.756660    9560 out.go:204]   - Generating certificates and keys ...
	I0531 19:44:07.763661    9560 out.go:204]   - Booting up control plane ...
	I0531 19:44:07.770647    9560 out.go:204]   - Configuring RBAC rules ...
	I0531 19:44:07.776631    9560 cni.go:95] Creating CNI manager for ""
	I0531 19:44:07.776631    9560 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 19:44:07.776631    9560 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 19:44:07.790660    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=default-k8s-different-port-20220531193451-2108 minikube.k8s.io/updated_at=2022_05_31T19_44_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:44:07.790660    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:44:07.840339    9560 ops.go:34] apiserver oom_adj: -16
	I0531 19:44:08.352099    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:44:05.309753    8616 pod_ready.go:102] pod "metrics-server-b955d9d8-n88dp" in "kube-system" namespace has status "Ready":"False"
	I0531 19:44:07.337428    8616 pod_ready.go:102] pod "metrics-server-b955d9d8-n88dp" in "kube-system" namespace has status "Ready":"False"
	I0531 19:44:09.798297    8616 pod_ready.go:102] pod "metrics-server-b955d9d8-n88dp" in "kube-system" namespace has status "Ready":"False"
	I0531 19:44:09.482153    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:44:09.971972    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:44:10.464847    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:44:10.969442    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:44:11.476944    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:44:11.961230    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:44:12.471788    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:44:12.967463    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:44:13.460534    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:44:13.967474    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:44:11.816175    8616 pod_ready.go:102] pod "metrics-server-b955d9d8-n88dp" in "kube-system" namespace has status "Ready":"False"
	I0531 19:44:14.312525    8616 pod_ready.go:102] pod "metrics-server-b955d9d8-n88dp" in "kube-system" namespace has status "Ready":"False"
	I0531 19:44:14.468026    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:44:14.970487    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:44:15.475084    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:44:15.974523    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:44:16.469115    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:44:16.974170    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:44:17.462022    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:44:17.968911    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:44:18.470960    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:44:16.806595    8616 pod_ready.go:102] pod "metrics-server-b955d9d8-n88dp" in "kube-system" namespace has status "Ready":"False"
	I0531 19:44:18.810488    8616 pod_ready.go:102] pod "metrics-server-b955d9d8-n88dp" in "kube-system" namespace has status "Ready":"False"
	I0531 19:44:19.534925    9560 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.0638088s)
	I0531 19:44:19.970577    9560 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:44:20.438654    9560 kubeadm.go:1045] duration metric: took 12.6619693s to wait for elevateKubeSystemPrivileges.
	I0531 19:44:20.438654    9560 kubeadm.go:397] StartCluster complete in 5m55.5759519s
	I0531 19:44:20.438654    9560 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:44:20.439658    9560 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0531 19:44:20.443654    9560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:44:21.229705    9560 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220531193451-2108" rescaled to 1
	I0531 19:44:21.229705    9560 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 19:44:21.229705    9560 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 19:44:21.235704    9560 out.go:177] * Verifying Kubernetes components...
	I0531 19:44:21.229705    9560 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0531 19:44:21.230733    9560 config.go:178] Loaded profile config "default-k8s-different-port-20220531193451-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:44:21.239692    9560 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220531193451-2108"
	I0531 19:44:21.239692    9560 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220531193451-2108"
	I0531 19:44:21.239692    9560 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220531193451-2108"
	W0531 19:44:21.239692    9560 addons.go:165] addon storage-provisioner should already be in state true
	I0531 19:44:21.239692    9560 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220531193451-2108"
	I0531 19:44:21.239692    9560 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220531193451-2108"
	I0531 19:44:21.239692    9560 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220531193451-2108"
	I0531 19:44:21.239692    9560 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220531193451-2108"
	I0531 19:44:21.239692    9560 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220531193451-2108"
	W0531 19:44:21.239692    9560 addons.go:165] addon metrics-server should already be in state true
	I0531 19:44:21.239692    9560 host.go:66] Checking if "default-k8s-different-port-20220531193451-2108" exists ...
	I0531 19:44:21.239692    9560 host.go:66] Checking if "default-k8s-different-port-20220531193451-2108" exists ...
	W0531 19:44:21.239692    9560 addons.go:165] addon dashboard should already be in state true
	I0531 19:44:21.239692    9560 host.go:66] Checking if "default-k8s-different-port-20220531193451-2108" exists ...
	I0531 19:44:21.260676    9560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:44:21.271683    9560 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531193451-2108 --format={{.State.Status}}
	I0531 19:44:21.273689    9560 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531193451-2108 --format={{.State.Status}}
	I0531 19:44:21.275673    9560 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531193451-2108 --format={{.State.Status}}
	I0531 19:44:21.275673    9560 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531193451-2108 --format={{.State.Status}}
	I0531 19:44:22.048618    9560 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 19:44:22.061616    9560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220531193451-2108
	I0531 19:44:22.911602    9560 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220531193451-2108 --format={{.State.Status}}: (1.6359211s)
	I0531 19:44:22.914600    9560 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0531 19:44:22.916617    9560 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220531193451-2108 --format={{.State.Status}}: (1.6449265s)
	I0531 19:44:22.919604    9560 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0531 19:44:22.922602    9560 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0531 19:44:22.922602    9560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0531 19:44:22.930625    9560 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220531193451-2108"
	W0531 19:44:22.930625    9560 addons.go:165] addon default-storageclass should already be in state true
	I0531 19:44:22.930625    9560 host.go:66] Checking if "default-k8s-different-port-20220531193451-2108" exists ...
	I0531 19:44:22.944627    9560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531193451-2108
	I0531 19:44:22.957615    9560 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220531193451-2108 --format={{.State.Status}}
	I0531 19:44:22.961614    9560 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220531193451-2108 --format={{.State.Status}}: (1.685933s)
	I0531 19:44:22.965626    9560 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 19:44:22.967607    9560 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 19:44:22.967607    9560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 19:44:22.976605    9560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531193451-2108
	I0531 19:44:22.980611    9560 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220531193451-2108 --format={{.State.Status}}: (1.7069152s)
	I0531 19:44:22.982607    9560 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0531 19:44:22.985603    9560 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 19:44:22.985603    9560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 19:44:22.994627    9560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531193451-2108
	I0531 19:44:23.687963    9560 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220531193451-2108: (1.6263394s)
	I0531 19:44:23.687963    9560 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220531193451-2108" to be "Ready" ...
	I0531 19:44:23.828370    9560 node_ready.go:49] node "default-k8s-different-port-20220531193451-2108" has status "Ready":"True"
	I0531 19:44:23.828734    9560 node_ready.go:38] duration metric: took 140.7709ms waiting for node "default-k8s-different-port-20220531193451-2108" to be "Ready" ...
	I0531 19:44:23.828734    9560 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:44:23.936964    9560 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-884ps" in "kube-system" namespace to be "Ready" ...
	I0531 19:44:21.322140    8616 pod_ready.go:102] pod "metrics-server-b955d9d8-n88dp" in "kube-system" namespace has status "Ready":"False"
	I0531 19:44:23.803392    8616 pod_ready.go:102] pod "metrics-server-b955d9d8-n88dp" in "kube-system" namespace has status "Ready":"False"
	I0531 19:44:24.501961    9560 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220531193451-2108 --format={{.State.Status}}: (1.5433206s)
	I0531 19:44:24.501961    9560 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 19:44:24.501961    9560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 19:44:24.506972    9560 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531193451-2108: (1.5123383s)
	I0531 19:44:24.506972    9560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54331 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\default-k8s-different-port-20220531193451-2108\id_rsa Username:docker}
	I0531 19:44:24.510973    9560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531193451-2108
	I0531 19:44:24.532741    9560 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531193451-2108: (1.5881072s)
	I0531 19:44:24.532741    9560 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531193451-2108: (1.5561296s)
	I0531 19:44:24.533696    9560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54331 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\default-k8s-different-port-20220531193451-2108\id_rsa Username:docker}
	I0531 19:44:24.533846    9560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54331 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\default-k8s-different-port-20220531193451-2108\id_rsa Username:docker}
	I0531 19:44:25.437388    9560 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 19:44:25.437388    9560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0531 19:44:25.462818    9560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 19:44:25.529121    9560 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0531 19:44:25.529121    9560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0531 19:44:25.731080    9560 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0531 19:44:25.731080    9560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0531 19:44:25.747097    9560 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 19:44:25.747097    9560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 19:44:25.844043    9560 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220531193451-2108: (1.3330635s)
	I0531 19:44:25.844043    9560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54331 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\default-k8s-different-port-20220531193451-2108\id_rsa Username:docker}
	I0531 19:44:25.937163    9560 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0531 19:44:25.937163    9560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0531 19:44:26.126892    9560 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 19:44:26.126892    9560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 19:44:26.325073    9560 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0531 19:44:26.325073    9560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0531 19:44:26.446367    9560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 19:44:26.453092    9560 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0531 19:44:26.453226    9560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0531 19:44:26.569123    9560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 19:44:26.642947    9560 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0531 19:44:26.642947    9560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0531 19:44:26.829387    9560 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0531 19:44:26.829577    9560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0531 19:44:27.125508    9560 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0531 19:44:27.125852    9560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0531 19:44:27.260412    9560 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 19:44:27.260412    9560 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0531 19:44:27.363794    9560 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0531 19:44:26.325073    8616 pod_ready.go:102] pod "metrics-server-b955d9d8-n88dp" in "kube-system" namespace has status "Ready":"False"
	I0531 19:44:31.634839    9560 pod_ready.go:102] pod "coredns-64897985d-884ps" in "kube-system" namespace has status "Ready":"False"
	I0531 19:44:32.237634    9560 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (10.1879316s)
	I0531 19:44:32.237634    9560 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0531 19:44:33.242650    9560 pod_ready.go:92] pod "coredns-64897985d-884ps" in "kube-system" namespace has status "Ready":"True"
	I0531 19:44:33.242721    9560 pod_ready.go:81] duration metric: took 9.3055945s waiting for pod "coredns-64897985d-884ps" in "kube-system" namespace to be "Ready" ...
	I0531 19:44:33.242721    9560 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-kh4sm" in "kube-system" namespace to be "Ready" ...
	I0531 19:44:33.627904    9560 pod_ready.go:92] pod "coredns-64897985d-kh4sm" in "kube-system" namespace has status "Ready":"True"
	I0531 19:44:33.627904    9560 pod_ready.go:81] duration metric: took 385.1101ms waiting for pod "coredns-64897985d-kh4sm" in "kube-system" namespace to be "Ready" ...
	I0531 19:44:33.627904    9560 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220531193451-2108" in "kube-system" namespace to be "Ready" ...
	I0531 19:44:33.731711    9560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.268857s)
	I0531 19:44:33.841582    9560 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220531193451-2108" in "kube-system" namespace has status "Ready":"True"
	I0531 19:44:33.841654    9560 pod_ready.go:81] duration metric: took 213.7491ms waiting for pod "etcd-default-k8s-different-port-20220531193451-2108" in "kube-system" namespace to be "Ready" ...
	I0531 19:44:33.841654    9560 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220531193451-2108" in "kube-system" namespace to be "Ready" ...
	I0531 19:44:34.145728    9560 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220531193451-2108" in "kube-system" namespace has status "Ready":"True"
	I0531 19:44:34.145969    9560 pod_ready.go:81] duration metric: took 304.3136ms waiting for pod "kube-apiserver-default-k8s-different-port-20220531193451-2108" in "kube-system" namespace to be "Ready" ...
	I0531 19:44:34.145969    9560 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220531193451-2108" in "kube-system" namespace to be "Ready" ...
	I0531 19:44:31.510804    8616 pod_ready.go:102] pod "metrics-server-b955d9d8-n88dp" in "kube-system" namespace has status "Ready":"False"
	I0531 19:44:33.810256    8616 pod_ready.go:102] pod "metrics-server-b955d9d8-n88dp" in "kube-system" namespace has status "Ready":"False"
	I0531 19:44:34.327366    9560 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220531193451-2108" in "kube-system" namespace has status "Ready":"True"
	I0531 19:44:34.327366    9560 pod_ready.go:81] duration metric: took 181.3956ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220531193451-2108" in "kube-system" namespace to be "Ready" ...
	I0531 19:44:34.327366    9560 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vm4xx" in "kube-system" namespace to be "Ready" ...
	I0531 19:44:34.437117    9560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.9907156s)
	I0531 19:44:34.437117    9560 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220531193451-2108"
	I0531 19:44:34.437117    9560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.8674005s)
	I0531 19:44:34.635216    9560 pod_ready.go:92] pod "kube-proxy-vm4xx" in "kube-system" namespace has status "Ready":"True"
	I0531 19:44:34.635216    9560 pod_ready.go:81] duration metric: took 307.8491ms waiting for pod "kube-proxy-vm4xx" in "kube-system" namespace to be "Ready" ...
	I0531 19:44:34.635216    9560 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220531193451-2108" in "kube-system" namespace to be "Ready" ...
	I0531 19:44:34.849278    9560 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220531193451-2108" in "kube-system" namespace has status "Ready":"True"
	I0531 19:44:34.849360    9560 pod_ready.go:81] duration metric: took 214.1034ms waiting for pod "kube-scheduler-default-k8s-different-port-20220531193451-2108" in "kube-system" namespace to be "Ready" ...
	I0531 19:44:34.849360    9560 pod_ready.go:38] duration metric: took 11.0204756s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:44:34.849436    9560 api_server.go:51] waiting for apiserver process to appear ...
	I0531 19:44:34.861644    9560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:44:36.044255    9560 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.6804234s)
	I0531 19:44:36.044255    9560 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.1826061s)
	I0531 19:44:36.044368    9560 api_server.go:71] duration metric: took 14.8145997s to wait for apiserver process to appear ...
	I0531 19:44:36.044368    9560 api_server.go:87] waiting for apiserver healthz status ...
	I0531 19:44:36.044430    9560 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54335/healthz ...
	I0531 19:44:36.057400    9560 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0531 19:44:36.060335    9560 addons.go:417] enableAddons completed in 14.8305665s
	I0531 19:44:36.224066    9560 api_server.go:266] https://127.0.0.1:54335/healthz returned 200:
	ok
	I0531 19:44:36.236498    9560 api_server.go:140] control plane version: v1.23.6
	I0531 19:44:36.236498    9560 api_server.go:130] duration metric: took 192.1293ms to wait for apiserver health ...
	I0531 19:44:36.236498    9560 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 19:44:36.336613    9560 system_pods.go:59] 9 kube-system pods found
	I0531 19:44:36.336613    9560 system_pods.go:61] "coredns-64897985d-884ps" [7e2fbf53-e186-470d-a621-422adcdefe32] Running
	I0531 19:44:36.336613    9560 system_pods.go:61] "coredns-64897985d-kh4sm" [97634b04-cc54-499b-b5bd-75ff7aaa05ab] Running
	I0531 19:44:36.336613    9560 system_pods.go:61] "etcd-default-k8s-different-port-20220531193451-2108" [7feb3af0-6f53-4104-9330-49b7717ce971] Running
	I0531 19:44:36.336613    9560 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220531193451-2108" [c7da96b9-e9f6-490e-9709-9b938eee0a8b] Running
	I0531 19:44:36.336613    9560 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220531193451-2108" [2005a083-b32b-4917-ab29-f35f09eb7b73] Running
	I0531 19:44:36.336613    9560 system_pods.go:61] "kube-proxy-vm4xx" [fa9681eb-0cbe-4d1f-9533-1f8e3a229507] Running
	I0531 19:44:36.336613    9560 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220531193451-2108" [edc80bc1-bac0-4f1a-a048-d26c23630cef] Running
	I0531 19:44:36.336613    9560 system_pods.go:61] "metrics-server-b955d9d8-ztsk8" [5c61df25-03cd-4529-b27f-75b5a684dd1d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 19:44:36.336613    9560 system_pods.go:61] "storage-provisioner" [92d9d4de-13a3-487a-8bd4-133e153e4444] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0531 19:44:36.336613    9560 system_pods.go:74] duration metric: took 100.1146ms to wait for pod list to return data ...
	I0531 19:44:36.336613    9560 default_sa.go:34] waiting for default service account to be created ...
	I0531 19:44:36.434215    9560 default_sa.go:45] found service account: "default"
	I0531 19:44:36.434215    9560 default_sa.go:55] duration metric: took 97.602ms for default service account to be created ...
	I0531 19:44:36.434215    9560 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 19:44:36.458219    9560 system_pods.go:86] 9 kube-system pods found
	I0531 19:44:36.458219    9560 system_pods.go:89] "coredns-64897985d-884ps" [7e2fbf53-e186-470d-a621-422adcdefe32] Running
	I0531 19:44:36.458219    9560 system_pods.go:89] "coredns-64897985d-kh4sm" [97634b04-cc54-499b-b5bd-75ff7aaa05ab] Running
	I0531 19:44:36.458219    9560 system_pods.go:89] "etcd-default-k8s-different-port-20220531193451-2108" [7feb3af0-6f53-4104-9330-49b7717ce971] Running
	I0531 19:44:36.458219    9560 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220531193451-2108" [c7da96b9-e9f6-490e-9709-9b938eee0a8b] Running
	I0531 19:44:36.458219    9560 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220531193451-2108" [2005a083-b32b-4917-ab29-f35f09eb7b73] Running
	I0531 19:44:36.458219    9560 system_pods.go:89] "kube-proxy-vm4xx" [fa9681eb-0cbe-4d1f-9533-1f8e3a229507] Running
	I0531 19:44:36.458219    9560 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220531193451-2108" [edc80bc1-bac0-4f1a-a048-d26c23630cef] Running
	I0531 19:44:36.458219    9560 system_pods.go:89] "metrics-server-b955d9d8-ztsk8" [5c61df25-03cd-4529-b27f-75b5a684dd1d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 19:44:36.458219    9560 system_pods.go:89] "storage-provisioner" [92d9d4de-13a3-487a-8bd4-133e153e4444] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0531 19:44:36.458219    9560 system_pods.go:126] duration metric: took 24.003ms to wait for k8s-apps to be running ...
	I0531 19:44:36.458219    9560 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 19:44:36.471401    9560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:44:36.632800    9560 system_svc.go:56] duration metric: took 174.5803ms WaitForService to wait for kubelet.
	I0531 19:44:36.632894    9560 kubeadm.go:572] duration metric: took 15.4031236s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0531 19:44:36.633100    9560 node_conditions.go:102] verifying NodePressure condition ...
	I0531 19:44:36.726771    9560 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0531 19:44:36.726825    9560 node_conditions.go:123] node cpu capacity is 16
	I0531 19:44:36.726825    9560 node_conditions.go:105] duration metric: took 93.7246ms to run NodePressure ...
	I0531 19:44:36.726825    9560 start.go:213] waiting for startup goroutines ...
	I0531 19:44:36.942049    9560 start.go:504] kubectl: 1.18.2, cluster: 1.23.6 (minor skew: 5)
	I0531 19:44:36.944037    9560 out.go:177] 
	W0531 19:44:36.947044    9560 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.23.6.
	I0531 19:44:36.952046    9560 out.go:177]   - Want kubectl v1.23.6? Try 'minikube kubectl -- get pods -A'
	I0531 19:44:36.959043    9560 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220531193451-2108" cluster and "default" namespace by default
	I0531 19:44:36.305796    8616 pod_ready.go:102] pod "metrics-server-b955d9d8-n88dp" in "kube-system" namespace has status "Ready":"False"
	I0531 19:44:38.314968    8616 pod_ready.go:102] pod "metrics-server-b955d9d8-n88dp" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-05-31 19:35:55 UTC, end at Tue 2022-05-31 19:44:49 UTC. --
	May 31 19:41:35 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:41:35.070201300Z" level=info msg="ignoring event" container=edfbcbda899056e2410c4acf8ea6a853cdd0b2fbe327f91acc1611a8d3d3391c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:42:18 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:42:18.649088800Z" level=error msg="stream copy error: reading from a closed fifo"
	May 31 19:42:18 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:42:18.719649300Z" level=error msg="stream copy error: reading from a closed fifo"
	May 31 19:42:19 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:42:19.821922300Z" level=error msg="247dabaa3d5f823c5b66c2062b47692e5b49e91435959ae09de0f032a5623800 cleanup: failed to delete container from containerd: no such container"
	May 31 19:42:19 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:42:19.822450700Z" level=error msg="Handler for POST /containers/247dabaa3d5f823c5b66c2062b47692e5b49e91435959ae09de0f032a5623800/start returned error: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: writing syncT \"procResume\": write init-p: broken pipe: unknown"
	May 31 19:42:28 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:42:28.645661400Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:42:28 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:42:28.645743100Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:42:28 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:42:28.655547400Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:42:29 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:42:29.873960300Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	May 31 19:42:30 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:42:30.049928000Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	May 31 19:42:47 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:42:47.980007500Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	May 31 19:42:48 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:42:48.345433300Z" level=info msg="ignoring event" container=9baa5613dc437df519abd371e1c81dd506a65210c0049a8f8c6a60cba02b84c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:42:49 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:42:49.050869500Z" level=info msg="ignoring event" container=a82f5f7cd3eb6a6365072d16275bd8268df4e96e619a79fa57e043015adfdee5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:43:01 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:43:01.022851800Z" level=info msg="ignoring event" container=817b790393050f42b01ddb3442a99a6c5ba58651a8df3b91e5f0c0ffb80c4666 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:43:11 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:43:11.258016700Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:43:11 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:43:11.258272000Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:43:11 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:43:11.270055200Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:43:22 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:43:22.420615000Z" level=info msg="ignoring event" container=03167ba0f08fdff581ad496cd4bbb5dbff52a4602923c0fecd2f201ef3c775cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:43:39 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:43:39.644305000Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:43:39 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:43:39.644465200Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:43:39 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:43:39.661137500Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:44:09 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:44:09.254101000Z" level=info msg="ignoring event" container=d9bab4566384602eebba16adb5527bbde86b9377f00da7d2e1af2ddc1ea2b2a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:44:21 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:44:21.620781100Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:44:21 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:44:21.620964500Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:44:21 old-k8s-version-20220531192531-2108 dockerd[250]: time="2022-05-31T19:44:21.651968600Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	d9bab45663846       a90209bb39e3d                                                                                    41 seconds ago       Exited              dashboard-metrics-scraper   4                   10762210a70b3
	8c65f88737998       kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2   About a minute ago   Running             kubernetes-dashboard        0                   363a1b018898e
	78333097362ef       6e38f40d628db                                                                                    2 minutes ago        Running             storage-provisioner         0                   8357a697373e0
	fb1686d28fd2f       bf261d1579144                                                                                    2 minutes ago        Running             coredns                     0                   e64f972d867d6
	49b44baa515c4       c21b0c7400f98                                                                                    2 minutes ago        Running             kube-proxy                  0                   46895f03742d0
	be55be2991218       b2756210eeabf                                                                                    3 minutes ago        Running             etcd                        0                   2f1d5465012c0
	2e7bf308b9082       301ddc62b80b1                                                                                    3 minutes ago        Running             kube-scheduler              0                   2b7ee67a9db7a
	86f1dffe85081       b305571ca60a5                                                                                    3 minutes ago        Running             kube-apiserver              0                   28f7e7bc70e63
	963332154a4f7       06a629a7e51cd                                                                                    3 minutes ago        Running             kube-controller-manager     0                   b80cecadcecf5
	
	* 
	* ==> coredns [fb1686d28fd2] <==
	* .:53
	2022-05-31T19:42:20.924Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2022-05-31T19:42:20.925Z [INFO] CoreDNS-1.6.2
	2022-05-31T19:42:20.929Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2022-05-31T19:42:57.534Z [INFO] plugin/reload: Running configuration MD5 = 034a4984a79adc08e57427d1bc08b68f
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20220531192531-2108
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20220531192531-2108
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=old-k8s-version-20220531192531-2108
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T19_41_59_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 19:41:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 19:44:16 +0000   Tue, 31 May 2022 19:41:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 19:44:16 +0000   Tue, 31 May 2022 19:41:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 19:44:16 +0000   Tue, 31 May 2022 19:41:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 May 2022 19:44:16 +0000   Tue, 31 May 2022 19:41:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-20220531192531-2108
	Capacity:
	 cpu:                16
	 ephemeral-storage:  263174212Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             52638988Ki
	 pods:               110
	Allocatable:
	 cpu:                16
	 ephemeral-storage:  263174212Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             52638988Ki
	 pods:               110
	System Info:
	 Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	 System UUID:                bfc82849fe6e4a6a9236307a23a8b5f1
	 Boot ID:                    99d8680c-6839-4c5e-a5fa-8740ef80d5ef
	 Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	 OS Image:                   Ubuntu 20.04.4 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://20.10.16
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-jxp72                                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m34s
	  kube-system                etcd-old-k8s-version-20220531192531-2108                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                kube-apiserver-old-k8s-version-20220531192531-2108             250m (1%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                kube-controller-manager-old-k8s-version-20220531192531-2108    200m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                kube-proxy-r556l                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                kube-scheduler-old-k8s-version-20220531192531-2108             100m (0%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                metrics-server-6f89b5864b-v8mjx                                100m (0%)     0 (0%)      200Mi (0%)       0 (0%)         2m25s
	  kube-system                storage-provisioner                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kubernetes-dashboard       dashboard-metrics-scraper-6b84985989-cs5v7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kubernetes-dashboard       kubernetes-dashboard-6fb5469cf5-spl8w                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%)   0 (0%)
	  memory             270Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From                                             Message
	  ----    ------                   ----                 ----                                             -------
	  Normal  NodeHasSufficientMemory  3m7s (x7 over 3m8s)  kubelet, old-k8s-version-20220531192531-2108     Node old-k8s-version-20220531192531-2108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m7s (x8 over 3m8s)  kubelet, old-k8s-version-20220531192531-2108     Node old-k8s-version-20220531192531-2108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m7s (x8 over 3m8s)  kubelet, old-k8s-version-20220531192531-2108     Node old-k8s-version-20220531192531-2108 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m30s                kube-proxy, old-k8s-version-20220531192531-2108  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.001366] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000006] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000932] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000006] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.089750] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.002712] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.106424] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.091580] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000006] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[May31 19:22] WSL2: Performing memory compaction.
	[May31 19:23] WSL2: Performing memory compaction.
	[May31 19:24] WSL2: Performing memory compaction.
	[May31 19:25] WSL2: Performing memory compaction.
	[May31 19:26] WSL2: Performing memory compaction.
	[May31 19:27] WSL2: Performing memory compaction.
	[May31 19:28] WSL2: Performing memory compaction.
	[May31 19:30] WSL2: Performing memory compaction.
	[May31 19:32] WSL2: Performing memory compaction.
	[May31 19:34] WSL2: Performing memory compaction.
	[May31 19:37] WSL2: Performing memory compaction.
	[May31 19:39] WSL2: Performing memory compaction.
	[May31 19:40] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [be55be299121] <==
	* 2022-05-31 19:42:26.630811 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-cs5v7\" " with result "range_response_count:1 size:1332" took too long (107.6487ms) to execute
	2022-05-31 19:42:26.830639 W | etcdserver: read-only range request "key:\"/registry/services/specs/kubernetes-dashboard/dashboard-metrics-scraper\" " with result "range_response_count:0 size:5" took too long (105.1522ms) to execute
	2022-05-31 19:42:26.840350 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (107.9713ms) to execute
	2022-05-31 19:42:27.442251 W | etcdserver: request "header:<ID:15638328711796955265 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" mod_revision:446 > success:<request_put:<key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" value_size:2654 >> failure:<request_range:<key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" > >>" with result "size:16" took too long (107.2543ms) to execute
	2022-05-31 19:42:27.442940 W | etcdserver: read-only range request "key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5\" " with result "range_response_count:1 size:1341" took too long (114.3883ms) to execute
	2022-05-31 19:42:27.443132 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5-spl8w\" " with result "range_response_count:1 size:1433" took too long (118.4499ms) to execute
	2022-05-31 19:42:27.443515 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:4 size:7727" took too long (103.0409ms) to execute
	2022-05-31 19:42:27.726784 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:4 size:7727" took too long (191.366ms) to execute
	2022-05-31 19:42:59.737641 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (192.8333ms) to execute
	2022-05-31 19:43:46.615330 W | etcdserver: read-only range request "key:\"/registry/services/endpoints\" range_end:\"/registry/services/endpointt\" count_only:true " with result "range_response_count:0 size:7" took too long (179.4081ms) to execute
	2022-05-31 19:44:28.378981 W | wal: sync duration of 2.652483s, expected less than 1s
	2022-05-31 19:44:28.379407 W | etcdserver: read-only range request "key:\"/registry/ingress\" range_end:\"/registry/ingrest\" count_only:true " with result "range_response_count:0 size:5" took too long (2.5672282s) to execute
	2022-05-31 19:44:28.379449 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (446.7121ms) to execute
	2022-05-31 19:44:28.379503 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:172" took too long (1.0195367s) to execute
	2022-05-31 19:44:28.379646 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (1.7875564s) to execute
	2022-05-31 19:44:28.379737 W | etcdserver: read-only range request "key:\"/registry/priorityclasses\" range_end:\"/registry/priorityclasset\" count_only:true " with result "range_response_count:0 size:7" took too long (2.2886056s) to execute
	2022-05-31 19:44:28.379819 W | etcdserver: read-only range request "key:\"/registry/replicasets\" range_end:\"/registry/replicasett\" count_only:true " with result "range_response_count:0 size:7" took too long (1.8233543s) to execute
	2022-05-31 19:44:28.380038 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices\" range_end:\"/registry/apiregistration.k8s.io/apiservicet\" count_only:true " with result "range_response_count:0 size:7" took too long (1.7236893s) to execute
	2022-05-31 19:44:31.265268 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context canceled" took too long (1.9985346s) to execute
	WARNING: 2022/05/31 19:44:31 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2022-05-31 19:44:31.447580 W | wal: sync duration of 3.0605439s, expected less than 1s
	2022-05-31 19:44:31.452686 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (1.057688s) to execute
	2022-05-31 19:44:31.452899 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (991.3616ms) to execute
	2022-05-31 19:44:31.595680 W | etcdserver: read-only range request "key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" " with result "range_response_count:1 size:133" took too long (133.4275ms) to execute
	2022-05-31 19:44:31.595763 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (137.9661ms) to execute
	
	* 
	* ==> kernel <==
	*  19:44:49 up  2:32,  0 users,  load average: 9.76, 6.95, 4.78
	Linux old-k8s-version-20220531192531-2108 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [86f1dffe8508] <==
	* I0531 19:42:15.540631       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	E0531 19:42:21.037408       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
	E0531 19:42:24.321308       1 available_controller.go:416] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0531 19:42:28.041510       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0531 19:42:28.042270       1 handler_proxy.go:99] no RequestInfo found in the context
	E0531 19:42:28.042505       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 19:42:28.042586       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0531 19:43:28.042562       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0531 19:43:28.042881       1 handler_proxy.go:99] no RequestInfo found in the context
	E0531 19:43:28.042943       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 19:43:28.042960       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0531 19:44:28.380791       1 trace.go:116] Trace[1541496348]: "Get" url:/api/v1/namespaces/default (started: 2022-05-31 19:44:27.3576741 +0000 UTC m=+162.419119001) (total time: 1.0230698s):
	Trace[1541496348]: [1.0229261s] [1.0228664s] About to write a response
	I0531 19:44:28.381585       1 trace.go:116] Trace[950649266]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath (started: 2022-05-31 19:44:26.5882129 +0000 UTC m=+161.652586901) (total time: 1.7904001s):
	Trace[950649266]: [1.7896403s] [1.7895117s] About to write a response
	I0531 19:44:31.453626       1 trace.go:116] Trace[1006579568]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath (started: 2022-05-31 19:44:30.3938889 +0000 UTC m=+165.455333701) (total time: 1.0596932s):
	Trace[1006579568]: [1.0596037s] [1.0595329s] About to write a response
	I0531 19:44:31.453813       1 trace.go:116] Trace[1200980499]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2022-05-31 19:44:28.385392 +0000 UTC m=+163.446836901) (total time: 3.0683875s):
	Trace[1200980499]: [3.0649376s] [3.0634737s] Transaction prepared
	I0531 19:44:31.454012       1 trace.go:116] Trace[1731275634]: "List etcd3" key:/jobs,resourceVersion:,limit:500,continue: (started: 2022-05-31 19:44:30.4608477 +0000 UTC m=+165.522292601) (total time: 993.125ms):
	Trace[1731275634]: [993.125ms] [993.125ms] END
	I0531 19:44:31.454156       1 trace.go:116] Trace[1224807279]: "List" url:/apis/batch/v1/jobs (started: 2022-05-31 19:44:30.4606445 +0000 UTC m=+165.522089401) (total time: 993.4827ms):
	Trace[1224807279]: [993.383ms] [993.2085ms] Listing from storage done
	
	* 
	* ==> kube-controller-manager [963332154a4f] <==
	* E0531 19:42:25.723920       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5" failed with pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:42:25.724140       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"40a45895-1dcf-45f7-a413-5594ec6dfeae", APIVersion:"apps/v1", ResourceVersion:"442", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 19:42:25.831478       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5" failed with pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:42:25.831589       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"9c398431-17d0-4314-8e73-409c72e6fd2f", APIVersion:"apps/v1", ResourceVersion:"435", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:42:25.831621       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"40a45895-1dcf-45f7-a413-5594ec6dfeae", APIVersion:"apps/v1", ResourceVersion:"442", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 19:42:25.831478       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 19:42:25.845070       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5" failed with pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 19:42:25.845070       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:42:25.845111       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"9c398431-17d0-4314-8e73-409c72e6fd2f", APIVersion:"apps/v1", ResourceVersion:"435", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:42:25.845338       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"40a45895-1dcf-45f7-a413-5594ec6dfeae", APIVersion:"apps/v1", ResourceVersion:"442", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 19:42:25.925957       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5" failed with pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:42:25.926050       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"40a45895-1dcf-45f7-a413-5594ec6dfeae", APIVersion:"apps/v1", ResourceVersion:"442", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 19:42:25.926273       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:42:25.926443       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"9c398431-17d0-4314-8e73-409c72e6fd2f", APIVersion:"apps/v1", ResourceVersion:"435", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:42:26.334974       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"9c398431-17d0-4314-8e73-409c72e6fd2f", APIVersion:"apps/v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-6b84985989-cs5v7
	I0531 19:42:27.125010       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"40a45895-1dcf-45f7-a413-5594ec6dfeae", APIVersion:"apps/v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-6fb5469cf5-spl8w
	E0531 19:42:45.384462       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 19:42:47.140581       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 19:43:15.637500       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 19:43:19.143177       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 19:43:45.891581       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 19:43:51.148696       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 19:44:16.143760       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 19:44:23.151407       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 19:44:46.400529       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [49b44baa515c] <==
	* W0531 19:42:19.444246       1 proxier.go:584] Failed to read file /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin with error open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0531 19:42:19.446817       1 proxier.go:597] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0531 19:42:19.448464       1 proxier.go:597] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0531 19:42:19.450411       1 proxier.go:597] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0531 19:42:19.452862       1 proxier.go:597] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0531 19:42:19.455345       1 proxier.go:597] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0531 19:42:19.467121       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0531 19:42:19.544240       1 node.go:135] Successfully retrieved node IP: 192.168.76.2
	I0531 19:42:19.544403       1 server_others.go:149] Using iptables Proxier.
	I0531 19:42:19.546875       1 server.go:529] Version: v1.16.0
	I0531 19:42:19.549594       1 config.go:313] Starting service config controller
	I0531 19:42:19.549892       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0531 19:42:19.550030       1 config.go:131] Starting endpoints config controller
	I0531 19:42:19.550059       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0531 19:42:19.719990       1 shared_informer.go:204] Caches are synced for service config 
	I0531 19:42:19.720004       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [2e7bf308b908] <==
	* E0531 19:41:54.325388       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 19:41:54.325504       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 19:41:54.325639       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 19:41:54.329115       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 19:41:54.329869       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 19:41:54.330888       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 19:41:54.330887       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 19:41:54.332887       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 19:41:54.332892       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 19:41:55.329102       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 19:41:55.332862       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 19:41:55.332977       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 19:41:55.334522       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 19:41:55.334630       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 19:41:55.423075       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 19:41:55.423889       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 19:41:55.425687       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 19:41:55.425758       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 19:41:55.431862       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 19:41:55.432247       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 19:41:56.333544       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 19:41:56.336222       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 19:41:56.338066       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 19:41:56.342236       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 19:41:56.342315       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 19:35:55 UTC, end at Tue 2022-05-31 19:44:50 UTC. --
	May 31 19:43:24 old-k8s-version-20220531192531-2108 kubelet[5553]: W0531 19:43:24.306814    5553 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-cs5v7 through plugin: invalid network status for
	May 31 19:43:25 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:43:25.560752    5553 pod_workers.go:191] Error syncing pod c248957e-7215-4044-9e73-acc8998b61f1 ("metrics-server-6f89b5864b-v8mjx_kube-system(c248957e-7215-4044-9e73-acc8998b61f1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	May 31 19:43:29 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:43:29.445512    5553 pod_workers.go:191] Error syncing pod 82fa1259-4596-4392-9f56-571eafe449d1 ("dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"
	May 31 19:43:39 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:43:39.662275    5553 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	May 31 19:43:39 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:43:39.662417    5553 kuberuntime_image.go:50] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	May 31 19:43:39 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:43:39.662599    5553 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	May 31 19:43:39 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:43:39.662670    5553 pod_workers.go:191] Error syncing pod c248957e-7215-4044-9e73-acc8998b61f1 ("metrics-server-6f89b5864b-v8mjx_kube-system(c248957e-7215-4044-9e73-acc8998b61f1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:43:42 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:43:42.551936    5553 pod_workers.go:191] Error syncing pod 82fa1259-4596-4392-9f56-571eafe449d1 ("dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"
	May 31 19:43:51 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:43:51.568837    5553 pod_workers.go:191] Error syncing pod c248957e-7215-4044-9e73-acc8998b61f1 ("metrics-server-6f89b5864b-v8mjx_kube-system(c248957e-7215-4044-9e73-acc8998b61f1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	May 31 19:43:54 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:43:54.552084    5553 pod_workers.go:191] Error syncing pod 82fa1259-4596-4392-9f56-571eafe449d1 ("dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"
	May 31 19:44:06 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:44:06.557836    5553 pod_workers.go:191] Error syncing pod c248957e-7215-4044-9e73-acc8998b61f1 ("metrics-server-6f89b5864b-v8mjx_kube-system(c248957e-7215-4044-9e73-acc8998b61f1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	May 31 19:44:08 old-k8s-version-20220531192531-2108 kubelet[5553]: W0531 19:44:08.934232    5553 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-cs5v7 through plugin: invalid network status for
	May 31 19:44:09 old-k8s-version-20220531192531-2108 kubelet[5553]: W0531 19:44:09.402928    5553 container.go:409] Failed to create summary reader for "/kubepods/besteffort/pod82fa1259-4596-4392-9f56-571eafe449d1/d9bab4566384602eebba16adb5527bbde86b9377f00da7d2e1af2ddc1ea2b2a8": none of the resources are being tracked.
	May 31 19:44:10 old-k8s-version-20220531192531-2108 kubelet[5553]: W0531 19:44:10.250330    5553 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-cs5v7 through plugin: invalid network status for
	May 31 19:44:10 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:44:10.268374    5553 pod_workers.go:191] Error syncing pod 82fa1259-4596-4392-9f56-571eafe449d1 ("dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"
	May 31 19:44:11 old-k8s-version-20220531192531-2108 kubelet[5553]: W0531 19:44:11.282563    5553 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-cs5v7 through plugin: invalid network status for
	May 31 19:44:19 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:44:19.443677    5553 pod_workers.go:191] Error syncing pod 82fa1259-4596-4392-9f56-571eafe449d1 ("dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"
	May 31 19:44:21 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:44:21.653489    5553 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	May 31 19:44:21 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:44:21.653959    5553 kuberuntime_image.go:50] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	May 31 19:44:21 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:44:21.654224    5553 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	May 31 19:44:21 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:44:21.654330    5553 pod_workers.go:191] Error syncing pod c248957e-7215-4044-9e73-acc8998b61f1 ("metrics-server-6f89b5864b-v8mjx_kube-system(c248957e-7215-4044-9e73-acc8998b61f1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:44:30 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:44:30.553156    5553 pod_workers.go:191] Error syncing pod 82fa1259-4596-4392-9f56-571eafe449d1 ("dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"
	May 31 19:44:35 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:44:35.557793    5553 pod_workers.go:191] Error syncing pod c248957e-7215-4044-9e73-acc8998b61f1 ("metrics-server-6f89b5864b-v8mjx_kube-system(c248957e-7215-4044-9e73-acc8998b61f1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	May 31 19:44:45 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:44:45.553135    5553 pod_workers.go:191] Error syncing pod 82fa1259-4596-4392-9f56-571eafe449d1 ("dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-cs5v7_kubernetes-dashboard(82fa1259-4596-4392-9f56-571eafe449d1)"
	May 31 19:44:46 old-k8s-version-20220531192531-2108 kubelet[5553]: E0531 19:44:46.557032    5553 pod_workers.go:191] Error syncing pod c248957e-7215-4044-9e73-acc8998b61f1 ("metrics-server-6f89b5864b-v8mjx_kube-system(c248957e-7215-4044-9e73-acc8998b61f1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	
	* 
	* ==> kubernetes-dashboard [8c65f8873799] <==
	* 2022/05/31 19:43:11 Starting overwatch
	2022/05/31 19:43:11 Using namespace: kubernetes-dashboard
	2022/05/31 19:43:11 Using in-cluster config to connect to apiserver
	2022/05/31 19:43:11 Using secret token for csrf signing
	2022/05/31 19:43:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/05/31 19:43:11 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/05/31 19:43:11 Successful initial request to the apiserver, version: v1.16.0
	2022/05/31 19:43:11 Generating JWE encryption key
	2022/05/31 19:43:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/05/31 19:43:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/05/31 19:43:12 Initializing JWE encryption key from synchronized object
	2022/05/31 19:43:12 Creating in-cluster Sidecar client
	2022/05/31 19:43:12 Serving insecurely on HTTP port: 9090
	2022/05/31 19:43:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/05/31 19:43:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/05/31 19:44:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/05/31 19:44:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [78333097362e] <==
	* I0531 19:42:26.537283       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0531 19:42:26.627656       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0531 19:42:26.628194       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0531 19:42:26.726417       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0531 19:42:26.728128       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20220531192531-2108_dedac804-a1e6-404e-a773-fabda9042592!
	I0531 19:42:26.729441       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9faff84a-d94b-40cf-9dbc-713d9688f4e4", APIVersion:"v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20220531192531-2108_dedac804-a1e6-404e-a773-fabda9042592 became leader
	I0531 19:42:26.931160       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20220531192531-2108_dedac804-a1e6-404e-a773-fabda9042592!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20220531192531-2108 -n old-k8s-version-20220531192531-2108
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20220531192531-2108 -n old-k8s-version-20220531192531-2108: (7.5771303s)
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-20220531192531-2108 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-6f89b5864b-v8mjx
helpers_test.go:272: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context old-k8s-version-20220531192531-2108 describe pod metrics-server-6f89b5864b-v8mjx

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220531192531-2108 describe pod metrics-server-6f89b5864b-v8mjx: exit status 1 (329.8148ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6f89b5864b-v8mjx" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context old-k8s-version-20220531192531-2108 describe pod metrics-server-6f89b5864b-v8mjx: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (69.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium/Start (610.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p cilium-20220531191937-2108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cilium-20220531191937-2108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker: exit status 80 (10m10.2993893s)

                                                
                                                
-- stdout --
	* [cilium-20220531191937-2108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node cilium-20220531191937-2108 in cluster cilium-20220531191937-2108
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Cilium (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 19:47:19.217590    9268 out.go:296] Setting OutFile to fd 712 ...
	I0531 19:47:19.278596    9268 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:47:19.278596    9268 out.go:309] Setting ErrFile to fd 1932...
	I0531 19:47:19.278596    9268 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:47:19.297850    9268 out.go:303] Setting JSON to false
	I0531 19:47:19.303849    9268 start.go:115] hostinfo: {"hostname":"minikube7","uptime":84709,"bootTime":1653941730,"procs":158,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0531 19:47:19.303849    9268 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 19:47:19.309419    9268 out.go:177] * [cilium-20220531191937-2108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0531 19:47:19.313642    9268 notify.go:193] Checking for updates...
	I0531 19:47:19.317167    9268 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0531 19:47:19.320604    9268 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0531 19:47:19.323082    9268 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 19:47:19.326529    9268 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:47:19.330325    9268 config.go:178] Loaded profile config "auto-20220531191922-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:47:19.330870    9268 config.go:178] Loaded profile config "embed-certs-20220531193346-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:47:19.331448    9268 config.go:178] Loaded profile config "kindnet-20220531191930-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:47:19.331736    9268 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 19:47:22.056358    9268 docker.go:137] docker version: linux-20.10.14
	I0531 19:47:22.069230    9268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:47:24.316876    9268 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2474528s)
	I0531 19:47:24.317733    9268 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:76 OomKillDisable:true NGoroutines:58 SystemTime:2022-05-31 19:47:23.1932863 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 19:47:24.322041    9268 out.go:177] * Using the docker driver based on user configuration
	I0531 19:47:24.325725    9268 start.go:284] selected driver: docker
	I0531 19:47:24.325725    9268 start.go:806] validating driver "docker" against <nil>
	I0531 19:47:24.325725    9268 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:47:24.393157    9268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:47:26.469542    9268 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0763763s)
	I0531 19:47:26.469753    9268 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:76 OomKillDisable:true NGoroutines:58 SystemTime:2022-05-31 19:47:25.4358344 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0531 19:47:26.469753    9268 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0531 19:47:26.470460    9268 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 19:47:26.479386    9268 out.go:177] * Using Docker Desktop driver with the root privilege
	I0531 19:47:26.482388    9268 cni.go:95] Creating CNI manager for "cilium"
	I0531 19:47:26.483390    9268 start_flags.go:301] Found "Cilium" CNI - setting NetworkPlugin=cni
	I0531 19:47:26.483390    9268 start_flags.go:306] config:
	{Name:cilium-20220531191937-2108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:cilium-20220531191937-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 19:47:26.488055    9268 out.go:177] * Starting control plane node cilium-20220531191937-2108 in cluster cilium-20220531191937-2108
	I0531 19:47:26.490618    9268 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 19:47:26.495318    9268 out.go:177] * Pulling base image ...
	I0531 19:47:26.498492    9268 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 19:47:26.498492    9268 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 19:47:26.498492    9268 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0531 19:47:26.498492    9268 cache.go:57] Caching tarball of preloaded images
	I0531 19:47:26.499020    9268 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 19:47:26.499113    9268 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0531 19:47:26.499113    9268 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\config.json ...
	I0531 19:47:26.499113    9268 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\config.json: {Name:mkea1612a842ac986f1eb1c0f851f2c0b09e23a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:47:27.608354    9268 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 19:47:27.608443    9268 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 19:47:27.608549    9268 cache.go:206] Successfully downloaded all kic artifacts
	I0531 19:47:27.608549    9268 start.go:352] acquiring machines lock for cilium-20220531191937-2108: {Name:mk9e58b6c885a5b3f36494dbce20134a09cdb35e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:47:27.608549    9268 start.go:356] acquired machines lock for "cilium-20220531191937-2108" in 0s
	I0531 19:47:27.609089    9268 start.go:91] Provisioning new machine with config: &{Name:cilium-20220531191937-2108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:cilium-20220531191937-2108 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 19:47:27.609402    9268 start.go:131] createHost starting for "" (driver="docker")
	I0531 19:47:27.613663    9268 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0531 19:47:27.614218    9268 start.go:165] libmachine.API.Create for "cilium-20220531191937-2108" (driver="docker")
	I0531 19:47:27.614367    9268 client.go:168] LocalClient.Create starting
	I0531 19:47:27.614367    9268 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0531 19:47:27.615064    9268 main.go:134] libmachine: Decoding PEM data...
	I0531 19:47:27.615064    9268 main.go:134] libmachine: Parsing certificate...
	I0531 19:47:27.615280    9268 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0531 19:47:27.615605    9268 main.go:134] libmachine: Decoding PEM data...
	I0531 19:47:27.615605    9268 main.go:134] libmachine: Parsing certificate...
	I0531 19:47:27.630133    9268 cli_runner.go:164] Run: docker network inspect cilium-20220531191937-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 19:47:28.728068    9268 cli_runner.go:211] docker network inspect cilium-20220531191937-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 19:47:28.728220    9268 cli_runner.go:217] Completed: docker network inspect cilium-20220531191937-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0979305s)
	I0531 19:47:28.736419    9268 network_create.go:272] running [docker network inspect cilium-20220531191937-2108] to gather additional debugging logs...
	I0531 19:47:28.736419    9268 cli_runner.go:164] Run: docker network inspect cilium-20220531191937-2108
	W0531 19:47:29.803384    9268 cli_runner.go:211] docker network inspect cilium-20220531191937-2108 returned with exit code 1
	I0531 19:47:29.803384    9268 cli_runner.go:217] Completed: docker network inspect cilium-20220531191937-2108: (1.0669602s)
	I0531 19:47:29.803384    9268 network_create.go:275] error running [docker network inspect cilium-20220531191937-2108]: docker network inspect cilium-20220531191937-2108: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220531191937-2108
	I0531 19:47:29.803384    9268 network_create.go:277] output of [docker network inspect cilium-20220531191937-2108]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220531191937-2108
	
	** /stderr **
	I0531 19:47:29.811391    9268 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 19:47:30.932517    9268 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1211208s)
	I0531 19:47:30.955519    9268 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00055e368] misses:0}
	I0531 19:47:30.955519    9268 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 19:47:30.955519    9268 network_create.go:115] attempt to create docker network cilium-20220531191937-2108 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0531 19:47:30.963512    9268 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220531191937-2108
	W0531 19:47:32.161506    9268 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220531191937-2108 returned with exit code 1
	I0531 19:47:32.161662    9268 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220531191937-2108: (1.197852s)
	W0531 19:47:32.161662    9268 network_create.go:107] failed to create docker network cilium-20220531191937-2108 192.168.49.0/24, will retry: subnet is taken
	I0531 19:47:32.181276    9268 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00055e368] amended:false}} dirty:map[] misses:0}
	I0531 19:47:32.181276    9268 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 19:47:32.201335    9268 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00055e368] amended:true}} dirty:map[192.168.49.0:0xc00055e368 192.168.58.0:0xc000006688] misses:0}
	I0531 19:47:32.201335    9268 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 19:47:32.201335    9268 network_create.go:115] attempt to create docker network cilium-20220531191937-2108 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0531 19:47:32.208876    9268 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220531191937-2108
	I0531 19:47:33.419302    9268 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220531191937-2108: (1.2103242s)
	I0531 19:47:33.419302    9268 network_create.go:99] docker network cilium-20220531191937-2108 192.168.58.0/24 created
	I0531 19:47:33.419375    9268 kic.go:106] calculated static IP "192.168.58.2" for the "cilium-20220531191937-2108" container
	I0531 19:47:33.434630    9268 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 19:47:34.587341    9268 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1525273s)
	I0531 19:47:34.595280    9268 cli_runner.go:164] Run: docker volume create cilium-20220531191937-2108 --label name.minikube.sigs.k8s.io=cilium-20220531191937-2108 --label created_by.minikube.sigs.k8s.io=true
	I0531 19:47:35.746811    9268 cli_runner.go:217] Completed: docker volume create cilium-20220531191937-2108 --label name.minikube.sigs.k8s.io=cilium-20220531191937-2108 --label created_by.minikube.sigs.k8s.io=true: (1.151308s)
	I0531 19:47:35.746881    9268 oci.go:103] Successfully created a docker volume cilium-20220531191937-2108
	I0531 19:47:35.757201    9268 cli_runner.go:164] Run: docker run --rm --name cilium-20220531191937-2108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220531191937-2108 --entrypoint /usr/bin/test -v cilium-20220531191937-2108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib
	I0531 19:47:38.748807    9268 cli_runner.go:217] Completed: docker run --rm --name cilium-20220531191937-2108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220531191937-2108 --entrypoint /usr/bin/test -v cilium-20220531191937-2108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib: (2.9914525s)
	I0531 19:47:38.748954    9268 oci.go:107] Successfully prepared a docker volume cilium-20220531191937-2108
	I0531 19:47:38.748954    9268 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 19:47:38.748954    9268 kic.go:179] Starting extracting preloaded images to volume ...
	I0531 19:47:38.759795    9268 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20220531191937-2108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 19:48:08.301586    9268 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20220531191937-2108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir: (29.541589s)
	I0531 19:48:08.301670    9268 kic.go:188] duration metric: took 29.552589 seconds to extract preloaded images to volume
	I0531 19:48:08.310366    9268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:48:10.821060    9268 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.5106834s)
	I0531 19:48:10.821060    9268 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:74 OomKillDisable:true NGoroutines:58 SystemTime:2022-05-31 19:48:09.5689926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0531 19:48:10.828087    9268 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 19:48:13.261056    9268 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.4329581s)
	I0531 19:48:13.271052    9268 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220531191937-2108 --name cilium-20220531191937-2108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220531191937-2108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220531191937-2108 --network cilium-20220531191937-2108 --ip 192.168.58.2 --volume cilium-20220531191937-2108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418
	I0531 19:48:16.187190    9268 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220531191937-2108 --name cilium-20220531191937-2108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220531191937-2108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220531191937-2108 --network cilium-20220531191937-2108 --ip 192.168.58.2 --volume cilium-20220531191937-2108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418: (2.9161256s)
	I0531 19:48:16.196190    9268 cli_runner.go:164] Run: docker container inspect cilium-20220531191937-2108 --format={{.State.Running}}
	I0531 19:48:17.467534    9268 cli_runner.go:217] Completed: docker container inspect cilium-20220531191937-2108 --format={{.State.Running}}: (1.2711058s)
	I0531 19:48:17.477527    9268 cli_runner.go:164] Run: docker container inspect cilium-20220531191937-2108 --format={{.State.Status}}
	I0531 19:48:18.715754    9268 cli_runner.go:217] Completed: docker container inspect cilium-20220531191937-2108 --format={{.State.Status}}: (1.2379937s)
	I0531 19:48:18.724742    9268 cli_runner.go:164] Run: docker exec cilium-20220531191937-2108 stat /var/lib/dpkg/alternatives/iptables
	I0531 19:48:20.082903    9268 cli_runner.go:217] Completed: docker exec cilium-20220531191937-2108 stat /var/lib/dpkg/alternatives/iptables: (1.3581547s)
	I0531 19:48:20.082903    9268 oci.go:247] the created container "cilium-20220531191937-2108" has a running status.
	I0531 19:48:20.082903    9268 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\cilium-20220531191937-2108\id_rsa...
	I0531 19:48:20.214608    9268 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\cilium-20220531191937-2108\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0531 19:48:21.484220    9268 cli_runner.go:164] Run: docker container inspect cilium-20220531191937-2108 --format={{.State.Status}}
	I0531 19:48:22.650978    9268 cli_runner.go:217] Completed: docker container inspect cilium-20220531191937-2108 --format={{.State.Status}}: (1.1667527s)
	I0531 19:48:22.667965    9268 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0531 19:48:22.667965    9268 kic_runner.go:114] Args: [docker exec --privileged cilium-20220531191937-2108 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0531 19:48:24.035166    9268 kic_runner.go:123] Done: [docker exec --privileged cilium-20220531191937-2108 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.3671943s)
	I0531 19:48:24.041454    9268 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\cilium-20220531191937-2108\id_rsa...
	I0531 19:48:24.606428    9268 cli_runner.go:164] Run: docker container inspect cilium-20220531191937-2108 --format={{.State.Status}}
	I0531 19:48:25.787331    9268 cli_runner.go:217] Completed: docker container inspect cilium-20220531191937-2108 --format={{.State.Status}}: (1.1548336s)
	I0531 19:48:25.787506    9268 machine.go:88] provisioning docker machine ...
	I0531 19:48:25.787506    9268 ubuntu.go:169] provisioning hostname "cilium-20220531191937-2108"
	I0531 19:48:25.797206    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220531191937-2108
	I0531 19:48:26.941755    9268 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220531191937-2108: (1.1445437s)
	I0531 19:48:26.946762    9268 main.go:134] libmachine: Using SSH client type: native
	I0531 19:48:26.953987    9268 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54754 <nil> <nil>}
	I0531 19:48:26.953987    9268 main.go:134] libmachine: About to run SSH command:
	sudo hostname cilium-20220531191937-2108 && echo "cilium-20220531191937-2108" | sudo tee /etc/hostname
	I0531 19:48:27.221638    9268 main.go:134] libmachine: SSH cmd err, output: <nil>: cilium-20220531191937-2108
	
	I0531 19:48:27.232353    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220531191937-2108
	I0531 19:48:28.360495    9268 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220531191937-2108: (1.1278943s)
	I0531 19:48:28.365359    9268 main.go:134] libmachine: Using SSH client type: native
	I0531 19:48:28.365937    9268 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54754 <nil> <nil>}
	I0531 19:48:28.366046    9268 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scilium-20220531191937-2108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cilium-20220531191937-2108/g' /etc/hosts;
				else 
					echo '127.0.1.1 cilium-20220531191937-2108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 19:48:28.667846    9268 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 19:48:28.667935    9268 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0531 19:48:28.667998    9268 ubuntu.go:177] setting up certificates
	I0531 19:48:28.667998    9268 provision.go:83] configureAuth start
	I0531 19:48:28.676062    9268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220531191937-2108
	I0531 19:48:29.751867    9268 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220531191937-2108: (1.0758004s)
	I0531 19:48:29.751867    9268 provision.go:138] copyHostCerts
	I0531 19:48:29.751867    9268 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0531 19:48:29.751867    9268 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0531 19:48:29.752920    9268 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0531 19:48:29.753865    9268 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0531 19:48:29.753865    9268 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0531 19:48:29.753865    9268 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0531 19:48:29.754892    9268 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0531 19:48:29.755859    9268 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0531 19:48:29.755859    9268 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0531 19:48:29.756882    9268 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.cilium-20220531191937-2108 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube cilium-20220531191937-2108]
	I0531 19:48:29.997660    9268 provision.go:172] copyRemoteCerts
	I0531 19:48:30.009449    9268 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 19:48:30.018682    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220531191937-2108
	I0531 19:48:31.057809    9268 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220531191937-2108: (1.0391217s)
	I0531 19:48:31.058511    9268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54754 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\cilium-20220531191937-2108\id_rsa Username:docker}
	I0531 19:48:31.206606    9268 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.1971511s)
	I0531 19:48:31.207147    9268 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 19:48:31.271897    9268 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0531 19:48:31.323010    9268 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0531 19:48:31.371959    9268 provision.go:86] duration metric: configureAuth took 2.7039486s
	I0531 19:48:31.372155    9268 ubuntu.go:193] setting minikube options for container-runtime
	I0531 19:48:31.372667    9268 config.go:178] Loaded profile config "cilium-20220531191937-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:48:31.383028    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220531191937-2108
	I0531 19:48:32.456081    9268 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220531191937-2108: (1.0730485s)
	I0531 19:48:32.462481    9268 main.go:134] libmachine: Using SSH client type: native
	I0531 19:48:32.463528    9268 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54754 <nil> <nil>}
	I0531 19:48:32.463528    9268 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0531 19:48:32.672247    9268 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0531 19:48:32.672247    9268 ubuntu.go:71] root file system type: overlay
	I0531 19:48:32.673568    9268 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0531 19:48:32.681753    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220531191937-2108
	I0531 19:48:33.741320    9268 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220531191937-2108: (1.0595627s)
	I0531 19:48:33.744316    9268 main.go:134] libmachine: Using SSH client type: native
	I0531 19:48:33.745311    9268 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54754 <nil> <nil>}
	I0531 19:48:33.745311    9268 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0531 19:48:33.986583    9268 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0531 19:48:33.995529    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220531191937-2108
	I0531 19:48:35.046861    9268 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220531191937-2108: (1.0513277s)
	I0531 19:48:35.050883    9268 main.go:134] libmachine: Using SSH client type: native
	I0531 19:48:35.050883    9268 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54754 <nil> <nil>}
	I0531 19:48:35.050883    9268 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0531 19:48:36.498569    9268 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-31 19:48:33.967307000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0531 19:48:36.498661    9268 machine.go:91] provisioned docker machine in 10.711108s
	I0531 19:48:36.498734    9268 client.go:171] LocalClient.Create took 1m8.8840696s
	I0531 19:48:36.498734    9268 start.go:173] duration metric: libmachine.API.Create for "cilium-20220531191937-2108" took 1m8.8842184s
	I0531 19:48:36.498734    9268 start.go:306] post-start starting for "cilium-20220531191937-2108" (driver="docker")
	I0531 19:48:36.498832    9268 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 19:48:36.511843    9268 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 19:48:36.519175    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220531191937-2108
	I0531 19:48:37.597735    9268 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220531191937-2108: (1.0785154s)
	I0531 19:48:37.597807    9268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54754 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\cilium-20220531191937-2108\id_rsa Username:docker}
	I0531 19:48:37.753512    9268 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.2415123s)
	I0531 19:48:37.768940    9268 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 19:48:37.793144    9268 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 19:48:37.793144    9268 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 19:48:37.793144    9268 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 19:48:37.793144    9268 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 19:48:37.793144    9268 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0531 19:48:37.793144    9268 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0531 19:48:37.794049    9268 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\21082.pem -> 21082.pem in /etc/ssl/certs
	I0531 19:48:37.803047    9268 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 19:48:37.826132    9268 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\21082.pem --> /etc/ssl/certs/21082.pem (1708 bytes)
	I0531 19:48:37.879440    9268 start.go:309] post-start completed in 1.380602s
	I0531 19:48:37.890943    9268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220531191937-2108
	I0531 19:48:39.069045    9268 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220531191937-2108: (1.1779754s)
	I0531 19:48:39.069045    9268 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\config.json ...
	I0531 19:48:39.081960    9268 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:48:39.089034    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220531191937-2108
	I0531 19:48:40.313476    9268 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220531191937-2108: (1.2244031s)
	I0531 19:48:40.313868    9268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54754 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\cilium-20220531191937-2108\id_rsa Username:docker}
	I0531 19:48:40.400330    9268 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.3183639s)
	I0531 19:48:40.412028    9268 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 19:48:40.438223    9268 start.go:134] duration metric: createHost completed in 1m12.8285062s
	I0531 19:48:40.438364    9268 start.go:81] releasing machines lock for "cilium-20220531191937-2108", held for 1m12.829501s
	I0531 19:48:40.449902    9268 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220531191937-2108
	I0531 19:48:41.556568    9268 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220531191937-2108: (1.1066081s)
	I0531 19:48:41.558574    9268 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 19:48:41.567573    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220531191937-2108
	I0531 19:48:41.570571    9268 ssh_runner.go:195] Run: systemctl --version
	I0531 19:48:41.577570    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220531191937-2108
	I0531 19:48:42.700018    9268 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220531191937-2108: (1.1324401s)
	I0531 19:48:42.700018    9268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54754 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\cilium-20220531191937-2108\id_rsa Username:docker}
	I0531 19:48:42.721019    9268 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220531191937-2108: (1.1434438s)
	I0531 19:48:42.721019    9268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54754 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\cilium-20220531191937-2108\id_rsa Username:docker}
	I0531 19:48:42.798409    9268 ssh_runner.go:235] Completed: systemctl --version: (1.2278325s)
	I0531 19:48:42.808389    9268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 19:48:42.931501    9268 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.3729205s)
	I0531 19:48:42.940499    9268 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 19:48:42.970862    9268 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0531 19:48:42.982676    9268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 19:48:43.011214    9268 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 19:48:43.067404    9268 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0531 19:48:43.284827    9268 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0531 19:48:43.455438    9268 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 19:48:43.493420    9268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:48:43.676110    9268 ssh_runner.go:195] Run: sudo systemctl start docker
	I0531 19:48:43.718551    9268 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 19:48:43.825365    9268 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 19:48:43.922674    9268 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0531 19:48:43.930733    9268 cli_runner.go:164] Run: docker exec -t cilium-20220531191937-2108 dig +short host.docker.internal
	I0531 19:48:45.219163    9268 cli_runner.go:217] Completed: docker exec -t cilium-20220531191937-2108 dig +short host.docker.internal: (1.2884248s)
	I0531 19:48:45.219163    9268 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0531 19:48:45.228172    9268 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0531 19:48:45.241834    9268 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 19:48:45.276834    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20220531191937-2108
	I0531 19:48:46.354390    9268 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20220531191937-2108: (1.0770274s)
	I0531 19:48:46.354390    9268 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 19:48:46.361386    9268 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 19:48:46.444859    9268 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0531 19:48:46.444859    9268 docker.go:541] Images already preloaded, skipping extraction
	I0531 19:48:46.452824    9268 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 19:48:46.520915    9268 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0531 19:48:46.520953    9268 cache_images.go:84] Images are preloaded, skipping loading
	I0531 19:48:46.530652    9268 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0531 19:48:46.726287    9268 cni.go:95] Creating CNI manager for "cilium"
	I0531 19:48:46.726287    9268 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 19:48:46.726287    9268 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cilium-20220531191937-2108 NodeName:cilium-20220531191937-2108 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 19:48:46.726287    9268 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "cilium-20220531191937-2108"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 19:48:46.726934    9268 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=cilium-20220531191937-2108 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:cilium-20220531191937-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:}
	I0531 19:48:46.737359    9268 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 19:48:46.768612    9268 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 19:48:46.780845    9268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 19:48:46.811604    9268 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0531 19:48:46.850077    9268 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 19:48:46.894088    9268 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I0531 19:48:46.944720    9268 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0531 19:48:46.954718    9268 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 19:48:46.980461    9268 certs.go:54] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108 for IP: 192.168.58.2
	I0531 19:48:46.980461    9268 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0531 19:48:46.981102    9268 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0531 19:48:46.981961    9268 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\client.key
	I0531 19:48:46.981961    9268 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\client.crt with IP's: []
	I0531 19:48:47.260440    9268 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\client.crt ...
	I0531 19:48:47.260440    9268 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\client.crt: {Name:mkfe33647c3ad3aac201569723c5fc0e1edd9893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:48:47.261866    9268 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\client.key ...
	I0531 19:48:47.261866    9268 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\client.key: {Name:mkd97d25e7ed6677e6c4d7c21b6e1c8c2455776f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:48:47.262896    9268 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\apiserver.key.cee25041
	I0531 19:48:47.263776    9268 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0531 19:48:47.780014    9268 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\apiserver.crt.cee25041 ...
	I0531 19:48:47.780014    9268 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\apiserver.crt.cee25041: {Name:mkd62a0632ab36f625f9ca0f0a1e1c55a6fae821 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:48:47.781462    9268 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\apiserver.key.cee25041 ...
	I0531 19:48:47.781462    9268 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\apiserver.key.cee25041: {Name:mkc8633657d57158f0bda8161912acf9d70414cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:48:47.782333    9268 certs.go:320] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\apiserver.crt.cee25041 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\apiserver.crt
	I0531 19:48:47.789942    9268 certs.go:324] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\apiserver.key.cee25041 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\apiserver.key
	I0531 19:48:47.790927    9268 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\proxy-client.key
	I0531 19:48:47.791410    9268 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\proxy-client.crt with IP's: []
	I0531 19:48:48.016152    9268 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\proxy-client.crt ...
	I0531 19:48:48.016152    9268 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\proxy-client.crt: {Name:mk4612cab5437b5b51346883ff73dc34555dceab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:48:48.017804    9268 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\proxy-client.key ...
	I0531 19:48:48.017804    9268 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\proxy-client.key: {Name:mk8bba2692e9ee6aead2c0c19c23f73d8361e756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:48:48.025315    9268 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\2108.pem (1338 bytes)
	W0531 19:48:48.026469    9268 certs.go:384] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\2108_empty.pem, impossibly tiny 0 bytes
	I0531 19:48:48.026469    9268 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0531 19:48:48.026732    9268 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0531 19:48:48.026985    9268 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0531 19:48:48.027255    9268 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0531 19:48:48.027255    9268 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\21082.pem (1708 bytes)
	I0531 19:48:48.028928    9268 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 19:48:48.088403    9268 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 19:48:48.145868    9268 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 19:48:48.209697    9268 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\cilium-20220531191937-2108\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 19:48:48.269047    9268 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 19:48:48.326417    9268 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 19:48:48.408416    9268 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 19:48:48.458396    9268 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 19:48:48.511753    9268 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\2108.pem --> /usr/share/ca-certificates/2108.pem (1338 bytes)
	I0531 19:48:48.562810    9268 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\21082.pem --> /usr/share/ca-certificates/21082.pem (1708 bytes)
	I0531 19:48:48.611927    9268 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 19:48:48.659373    9268 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 19:48:48.702370    9268 ssh_runner.go:195] Run: openssl version
	I0531 19:48:48.730228    9268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21082.pem && ln -fs /usr/share/ca-certificates/21082.pem /etc/ssl/certs/21082.pem"
	I0531 19:48:48.769708    9268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21082.pem
	I0531 19:48:48.786475    9268 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:31 /usr/share/ca-certificates/21082.pem
	I0531 19:48:48.797651    9268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21082.pem
	I0531 19:48:48.831240    9268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21082.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 19:48:48.868959    9268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 19:48:48.904841    9268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:48:48.914846    9268 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:19 /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:48:48.923847    9268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:48:48.945843    9268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 19:48:48.978599    9268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2108.pem && ln -fs /usr/share/ca-certificates/2108.pem /etc/ssl/certs/2108.pem"
	I0531 19:48:49.020324    9268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2108.pem
	I0531 19:48:49.034469    9268 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:31 /usr/share/ca-certificates/2108.pem
	I0531 19:48:49.045730    9268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2108.pem
	I0531 19:48:49.080658    9268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2108.pem /etc/ssl/certs/51391683.0"
	I0531 19:48:49.103791    9268 kubeadm.go:395] StartCluster: {Name:cilium-20220531191937-2108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:cilium-20220531191937-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 19:48:49.114576    9268 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 19:48:49.203962    9268 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 19:48:49.238469    9268 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 19:48:49.263687    9268 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 19:48:49.273669    9268 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 19:48:49.301331    9268 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 19:48:49.301331    9268 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 19:49:12.473148    9268 out.go:204]   - Generating certificates and keys ...
	I0531 19:49:12.478581    9268 out.go:204]   - Booting up control plane ...
	I0531 19:49:12.483747    9268 out.go:204]   - Configuring RBAC rules ...
	I0531 19:49:12.487745    9268 cni.go:95] Creating CNI manager for "cilium"
	I0531 19:49:12.491784    9268 out.go:177] * Configuring Cilium (Container Networking Interface) ...
	I0531 19:49:12.504739    9268 ssh_runner.go:195] Run: sudo /bin/bash -c "grep 'bpffs /sys/fs/bpf' /proc/mounts || sudo mount bpffs -t bpf /sys/fs/bpf"
	I0531 19:49:12.564468    9268 cilium.go:816] Using pod CIDR: 10.244.0.0/16
	I0531 19:49:12.564569    9268 cilium.go:827] cilium options: {PodSubnet:10.244.0.0/16}
	I0531 19:49:12.564770    9268 cilium.go:831] cilium config:
	---
	# Source: cilium/templates/cilium-agent-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-configmap.yaml
	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: cilium-config
	  namespace: kube-system
	data:
	
	  # Identity allocation mode selects how identities are shared between cilium
	  # nodes by setting how they are stored. The options are "crd" or "kvstore".
	  # - "crd" stores identities in kubernetes as CRDs (custom resource definition).
	  #   These can be queried with:
	  #     kubectl get ciliumid
	  # - "kvstore" stores identities in a kvstore, etcd or consul, that is
	  #   configured below. Cilium versions before 1.6 supported only the kvstore
	  #   backend. Upgrades from these older cilium versions should continue using
	  #   the kvstore by commenting out the identity-allocation-mode below, or
	  #   setting it to "kvstore".
	  identity-allocation-mode: crd
	  cilium-endpoint-gc-interval: "5m0s"
	
	  # If you want to run cilium in debug mode change this value to true
	  debug: "false"
	  # The agent can be put into the following three policy enforcement modes
	  # default, always and never.
	  # https://docs.cilium.io/en/latest/policy/intro/#policy-enforcement-modes
	  enable-policy: "default"
	
	  # Enable IPv4 addressing. If enabled, all endpoints are allocated an IPv4
	  # address.
	  enable-ipv4: "true"
	
	  # Enable IPv6 addressing. If enabled, all endpoints are allocated an IPv6
	  # address.
	  enable-ipv6: "false"
	  # Users who wish to specify their own custom CNI configuration file must set
	  # custom-cni-conf to "true", otherwise Cilium may overwrite the configuration.
	  custom-cni-conf: "false"
	  enable-bpf-clock-probe: "true"
	  # If you want cilium monitor to aggregate tracing for packets, set this level
	  # to "low", "medium", or "maximum". The higher the level, the fewer packets
	  # will be seen in monitor output.
	  monitor-aggregation: medium
	
	  # The monitor aggregation interval governs the typical time between monitor
	  # notification events for each allowed connection.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-interval: 5s
	
	  # The monitor aggregation flags determine which TCP flags, upon the
	  # first observation, cause monitor notifications to be generated.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-flags: all
	  # Specifies the ratio (0.0-1.0) of total system memory to use for dynamic
	  # sizing of the TCP CT, non-TCP CT, NAT and policy BPF maps.
	  bpf-map-dynamic-size-ratio: "0.0025"
	  # bpf-policy-map-max specifies the maximum number of entries in endpoint
	  # policy map (per endpoint)
	  bpf-policy-map-max: "16384"
	  # bpf-lb-map-max specifies the maximum number of entries in bpf lb service,
	  # backend and affinity maps.
	  bpf-lb-map-max: "65536"
	  # Pre-allocation of map entries allows per-packet latency to be reduced, at
	  # the expense of up-front memory allocation for the entries in the maps. The
	  # default value below will minimize memory usage in the default installation;
	  # users who are sensitive to latency may consider setting this to "true".
	  #
	  # This option was introduced in Cilium 1.4. Cilium 1.3 and earlier ignore
	  # this option and behave as though it is set to "true".
	  #
	  # If this value is modified, then during the next Cilium startup the restore
	  # of existing endpoints and tracking of ongoing connections may be disrupted.
	  # As a result, reply packets may be dropped and the load-balancing decisions
	  # for established connections may change.
	  #
	  # If this option is set to "false" during an upgrade from 1.3 or earlier to
	  # 1.4 or later, then it may cause one-time disruptions during the upgrade.
	  preallocate-bpf-maps: "false"
	
	  # Regular expression matching compatible Istio sidecar istio-proxy
	  # container image names
	  sidecar-istio-proxy-image: "cilium/istio_proxy"
	
	  # Name of the cluster. Only relevant when building a mesh of clusters.
	  cluster-name: default
	  # Unique ID of the cluster. Must be unique across all connected clusters and
	  # in the range of 1 to 255. Only relevant when building a mesh of clusters.
	  cluster-id: ""
	
	  # Encapsulation mode for communication between nodes
	  # Possible values:
	  #   - disabled
	  #   - vxlan (default)
	  #   - geneve
	  tunnel: vxlan
	  # Enables L7 proxy for L7 policy enforcement and visibility
	  enable-l7-proxy: "true"
	
	  # wait-bpf-mount makes init container wait until bpf filesystem is mounted
	  wait-bpf-mount: "false"
	
	  masquerade: "true"
	  enable-bpf-masquerade: "true"
	
	  enable-xt-socket-fallback: "true"
	  install-iptables-rules: "true"
	
	  auto-direct-node-routes: "false"
	  enable-bandwidth-manager: "false"
	  enable-local-redirect-policy: "false"
	  kube-proxy-replacement:  "probe"
	  kube-proxy-replacement-healthz-bind-address: ""
	  enable-health-check-nodeport: "true"
	  node-port-bind-protection: "true"
	  enable-auto-protect-node-port-range: "true"
	  enable-session-affinity: "true"
	  k8s-require-ipv4-pod-cidr: "true"
	  k8s-require-ipv6-pod-cidr: "false"
	  enable-endpoint-health-checking: "true"
	  enable-health-checking: "true"
	  enable-well-known-identities: "false"
	  enable-remote-node-identity: "true"
	  operator-api-serve-addr: "127.0.0.1:9234"
	  # Enable Hubble gRPC service.
	  enable-hubble: "true"
	  # UNIX domain socket for Hubble server to listen to.
	  hubble-socket-path:  "/var/run/cilium/hubble.sock"
	  # An additional address for Hubble server to listen to (e.g. ":4244").
	  hubble-listen-address: ":4244"
	  hubble-disable-tls: "false"
	  hubble-tls-cert-file: /var/lib/cilium/tls/hubble/server.crt
	  hubble-tls-key-file: /var/lib/cilium/tls/hubble/server.key
	  hubble-tls-client-ca-files: /var/lib/cilium/tls/hubble/client-ca.crt
	  ipam: "cluster-pool"
	  cluster-pool-ipv4-cidr: "10.244.0.0/16"
	  cluster-pool-ipv4-mask-size: "24"
	  disable-cnp-status-updates: "true"
	  cgroup-root: "/run/cilium/cgroupv2"
	---
	# Source: cilium/templates/cilium-agent-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium
	rules:
	- apiGroups:
	  - networking.k8s.io
	  resources:
	  - networkpolicies
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - namespaces
	  - services
	  - nodes
	  - endpoints
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - pods
	  - pods/finalizers
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	  - delete
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  - nodes/status
	  verbs:
	  - patch
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  # Deprecated for removal in v1.10
	  - create
	  - list
	  - watch
	  - update
	
	  # This is used when validating policies in preflight. It will need to stay
	  # until we figure out how to avoid "get" inside the preflight; ideally it
	  # should then be removed.
	  - get
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	---
	# Source: cilium/templates/cilium-operator-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium-operator
	rules:
	- apiGroups:
	  - ""
	  resources:
	  # to automatically delete [core|kube]dns pods so that they start being
	  # managed by Cilium
	  - pods
	  verbs:
	  - get
	  - list
	  - watch
	  - delete
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  # to perform the translation of a CNP that contains 'ToGroup' to its endpoints
	  - services
	  - endpoints
	  # to check apiserver connectivity
	  - namespaces
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/status
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  - create
	  - get
	  - list
	  - update
	  - watch
	# For cilium-operator running in HA mode.
	#
	# Cilium operator running in HA mode requires the use of ResourceLock for Leader Election
	# between multiple running instances.
	# The preferred way of doing this is to use LeasesResourceLock as edits to Leases are less
	# common and fewer objects in the cluster watch "all Leases".
	# The support for leases was introduced in coordination.k8s.io/v1 during Kubernetes 1.14 release.
	# In Cilium we currently don't support HA mode for K8s version < 1.14. This condition makes sure
	# that we only authorize access to leases resources in supported K8s versions.
	- apiGroups:
	  - coordination.k8s.io
	  resources:
	  - leases
	  verbs:
	  - create
	  - get
	  - update
	---
	# Source: cilium/templates/cilium-agent-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium
	subjects:
	- kind: ServiceAccount
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium-operator
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium-operator
	subjects:
	- kind: ServiceAccount
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-agent-daemonset.yaml
	apiVersion: apps/v1
	kind: DaemonSet
	metadata:
	  labels:
	    k8s-app: cilium
	  name: cilium
	  namespace: kube-system
	spec:
	  selector:
	    matchLabels:
	      k8s-app: cilium
	  updateStrategy:
	    rollingUpdate:
	      maxUnavailable: 2
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	        # This annotation plus the CriticalAddonsOnly toleration marks
	        # cilium as a critical pod in the cluster, which ensures cilium
	        # gets priority scheduling.
	        # https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/
	        scheduler.alpha.kubernetes.io/critical-pod: ""
	      labels:
	        k8s-app: cilium
	    spec:
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: k8s-app
	                operator: In
	                values:
	                - cilium
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        command:
	        - cilium-agent
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9876
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 10
	          # The initial delay for the liveness probe is intentionally large to
	          # avoid an endless kill & restart cycle in the event that the initial
	          # bootstrapping takes longer than expected.
	          initialDelaySeconds: 120
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
	        readinessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9876
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 3
	          initialDelaySeconds: 5
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_FLANNEL_MASTER_DEVICE
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-master-device
	              name: cilium-config
	              optional: true
	        - name: CILIUM_FLANNEL_UNINSTALL_ON_EXIT
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-uninstall-on-exit
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CLUSTERMESH_CONFIG
	          value: /var/lib/cilium/clustermesh/
	        - name: CILIUM_CNI_CHAINING_MODE
	          valueFrom:
	            configMapKeyRef:
	              key: cni-chaining-mode
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CUSTOM_CNI_CONF
	          valueFrom:
	            configMapKeyRef:
	              key: custom-cni-conf
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        lifecycle:
	          postStart:
	            exec:
	              command:
	              - "/cni-install.sh"
	              - "--enable-debug=false"
	          preStop:
	            exec:
	              command:
	              - /cni-uninstall.sh
	        name: cilium-agent
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	            - SYS_MODULE
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        - mountPath: /host/opt/cni/bin
	          name: cni-path
	        - mountPath: /host/etc/cni/net.d
	          name: etc-cni-netd
	        - mountPath: /var/lib/cilium/clustermesh
	          name: clustermesh-secrets
	          readOnly: true
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
	          # Needed to be able to load kernel modules
	        - mountPath: /lib/modules
	          name: lib-modules
	          readOnly: true
	        - mountPath: /run/xtables.lock
	          name: xtables-lock
	        - mountPath: /var/lib/cilium/tls/hubble
	          name: hubble-tls
	          readOnly: true
	      hostNetwork: true
	      initContainers:
	      # Required to mount cgroup2 filesystem on the underlying Kubernetes node.
	      # We use nsenter command with host's cgroup and mount namespaces enabled.
	      - name: mount-cgroup
	        env:
	          - name: CGROUP_ROOT
	            value: /run/cilium/cgroupv2
	          - name: BIN_PATH
	            value: /opt/cni/bin
	        command:
	          - sh
	          - -c
	          # The statically linked Go program binary is invoked to avoid any
	          # dependency on utilities like sh and mount that can be missing on certain
	          # distros installed on the underlying host. Copy the binary to the
	          # same directory where we install cilium cni plugin so that exec permissions
	          # are available.
	          - 'cp /usr/bin/cilium-mount /hostbin/cilium-mount && nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; rm /hostbin/cilium-mount'
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        volumeMounts:
	          - mountPath: /hostproc
	            name: hostproc
	          - mountPath: /hostbin
	            name: cni-path
	        securityContext:
	          privileged: true
	      - command:
	        - /init-container.sh
	        env:
	        - name: CILIUM_ALL_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_BPF_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-bpf-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_WAIT_BPF_MOUNT
	          valueFrom:
	            configMapKeyRef:
	              key: wait-bpf-mount
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        name: clean-cilium-state
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	          mountPropagation: HostToContainer
	          # Required to mount cgroup filesystem from the host to cilium agent pod
	        - mountPath: /run/cilium/cgroupv2
	          name: cilium-cgroup
	          mountPropagation: HostToContainer
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        resources:
	          requests:
	            cpu: 100m
	            memory: 100Mi
	      restartPolicy: Always
	      priorityClassName: system-node-critical
	      serviceAccount: cilium
	      serviceAccountName: cilium
	      terminationGracePeriodSeconds: 1
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To keep state between restarts / upgrades
	      - hostPath:
	          path: /var/run/cilium
	          type: DirectoryOrCreate
	        name: cilium-run
	        # To keep state between restarts / upgrades for bpf maps
	      - hostPath:
	          path: /sys/fs/bpf
	          type: DirectoryOrCreate
	        name: bpf-maps
	      # To mount cgroup2 filesystem on the host
	      - hostPath:
	          path: /proc
	          type: Directory
	        name: hostproc
	      # To keep state between restarts / upgrades for cgroup2 filesystem
	      - hostPath:
	          path: /run/cilium/cgroupv2
	          type: DirectoryOrCreate
	        name: cilium-cgroup
	      # To install cilium cni plugin in the host
	      - hostPath:
	          path: /opt/cni/bin
	          type: DirectoryOrCreate
	        name: cni-path
	        # To install cilium cni configuration in the host
	      - hostPath:
	          path: /etc/cni/net.d
	          type: DirectoryOrCreate
	        name: etc-cni-netd
	        # To be able to load kernel modules
	      - hostPath:
	          path: /lib/modules
	        name: lib-modules
	        # To access iptables concurrently with other processes (e.g. kube-proxy)
	      - hostPath:
	          path: /run/xtables.lock
	          type: FileOrCreate
	        name: xtables-lock
	        # To read the clustermesh configuration
	      - name: clustermesh-secrets
	        secret:
	          defaultMode: 420
	          optional: true
	          secretName: cilium-clustermesh
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
	      - name: hubble-tls
	        projected:
	          sources:
	          - secret:
	              name: hubble-server-certs
	              items:
	                - key: tls.crt
	                  path: server.crt
	                - key: tls.key
	                  path: server.key
	              optional: true
	          - configMap:
	              name: hubble-ca-cert
	              items:
	                - key: ca.crt
	                  path: client-ca.crt
	              optional: true
	---
	# Source: cilium/templates/cilium-operator-deployment.yaml
	apiVersion: apps/v1
	kind: Deployment
	metadata:
	  labels:
	    io.cilium/app: operator
	    name: cilium-operator
	  name: cilium-operator
	  namespace: kube-system
	spec:
	  # We support HA mode only for Kubernetes version > 1.14
	  # See docs on ServerCapabilities.LeasesResourceLock in file pkg/k8s/version/version.go
	  # for more details.
	  replicas: 1
	  selector:
	    matchLabels:
	      io.cilium/app: operator
	      name: cilium-operator
	  strategy:
	    rollingUpdate:
	      maxSurge: 1
	      maxUnavailable: 1
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	      labels:
	        io.cilium/app: operator
	        name: cilium-operator
	    spec:
	      # In HA mode, cilium-operator pods must not be scheduled on the same
	      # node as they will clash with each other.
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: io.cilium/app
	                operator: In
	                values:
	                - operator
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        - --debug=$(CILIUM_DEBUG)
	        command:
	        - cilium-operator-generic
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_DEBUG
	          valueFrom:
	            configMapKeyRef:
	              key: debug
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/operator-generic:v1.9.9@sha256:3726a965cd960295ca3c5e7f2b543c02096c0912c6652eb8bbb9ce54bcaa99d8"
	        imagePullPolicy: IfNotPresent
	        name: cilium-operator
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9234
	            scheme: HTTP
	          initialDelaySeconds: 60
	          periodSeconds: 10
	          timeoutSeconds: 3
	        volumeMounts:
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
	      hostNetwork: true
	      restartPolicy: Always
	      priorityClassName: system-cluster-critical
	      serviceAccount: cilium-operator
	      serviceAccountName: cilium-operator
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
	
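The `mount-cgroup` init container in the manifest above uses a copy-then-nsenter pattern: it drops a statically linked helper onto a path shared with the host, runs it inside the host's cgroup and mount namespaces via PID 1, then deletes it. A minimal sketch of that pattern, using `/bin/true` as a stand-in binary and `/tmp/hostbin` as a stand-in for the host-shared `/opt/cni/bin` (the real `cilium-mount` and `nsenter` invocation only make sense inside that privileged container):

```shell
# Sketch of the mount-cgroup init container's technique (see manifest above).
# Assumptions: /bin/true stands in for /usr/bin/cilium-mount, /tmp/hostbin
# stands in for the host-shared CNI bin directory.
BIN_PATH=/tmp/hostbin
mkdir -p "$BIN_PATH"
cp /bin/true "$BIN_PATH/cilium-mount"       # copy helper to host-shared path
# The real container would now run it in the host's namespaces:
#   nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt \
#     "${BIN_PATH}/cilium-mount" "$CGROUP_ROOT"
"$BIN_PATH/cilium-mount" && echo "mounted-ok"
rm "$BIN_PATH/cilium-mount"                 # clean up after execution
```

The cleanup step matters: the helper only needs exec permissions on a host-visible filesystem for the duration of the mount, so it is removed immediately afterwards.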
	I0531 19:49:12.564956    9268 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 19:49:12.564956    9268 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (23204 bytes)
	I0531 19:49:12.665855    9268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 19:49:16.296942    9268 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (3.6310703s)
	I0531 19:49:16.297792    9268 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 19:49:16.314413    9268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=cilium-20220531191937-2108 minikube.k8s.io/updated_at=2022_05_31T19_49_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:49:16.318475    9268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:49:16.339405    9268 ops.go:34] apiserver oom_adj: -16
	I0531 19:49:16.751966    9268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:49:17.464273    9268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:49:17.969613    9268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:49:18.464610    9268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:49:18.969412    9268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:49:19.470364    9268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:49:19.973848    9268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:49:20.465164    9268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:49:20.978673    9268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:49:21.465392    9268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:49:21.972163    9268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:49:22.478314    9268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:49:23.474313    9268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:49:24.461746    9268 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:49:25.247898    9268 kubeadm.go:1045] duration metric: took 8.9499934s to wait for elevateKubeSystemPrivileges.
	I0531 19:49:25.247898    9268 kubeadm.go:397] StartCluster complete in 36.1439561s
	I0531 19:49:25.248046    9268 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:49:25.248549    9268 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0531 19:49:25.251782    9268 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0531 19:49:25.492773    9268 kapi.go:233] failed rescaling deployment, will retry: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0531 19:49:26.530833    9268 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cilium-20220531191937-2108" rescaled to 1
	I0531 19:49:26.530833    9268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 19:49:26.530833    9268 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0531 19:49:26.530833    9268 addons.go:65] Setting storage-provisioner=true in profile "cilium-20220531191937-2108"
	I0531 19:49:26.530833    9268 addons.go:153] Setting addon storage-provisioner=true in "cilium-20220531191937-2108"
	W0531 19:49:26.530833    9268 addons.go:165] addon storage-provisioner should already be in state true
	I0531 19:49:26.530833    9268 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 19:49:26.530833    9268 host.go:66] Checking if "cilium-20220531191937-2108" exists ...
	I0531 19:49:26.535830    9268 out.go:177] * Verifying Kubernetes components...
	I0531 19:49:26.530833    9268 addons.go:65] Setting default-storageclass=true in profile "cilium-20220531191937-2108"
	I0531 19:49:26.535830    9268 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cilium-20220531191937-2108"
	I0531 19:49:26.531834    9268 config.go:178] Loaded profile config "cilium-20220531191937-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:49:26.562831    9268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:49:26.565848    9268 cli_runner.go:164] Run: docker container inspect cilium-20220531191937-2108 --format={{.State.Status}}
	I0531 19:49:26.565848    9268 cli_runner.go:164] Run: docker container inspect cilium-20220531191937-2108 --format={{.State.Status}}
	I0531 19:49:27.150996    9268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 19:49:27.167003    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20220531191937-2108
	I0531 19:49:28.049610    9268 cli_runner.go:217] Completed: docker container inspect cilium-20220531191937-2108 --format={{.State.Status}}: (1.4837555s)
	I0531 19:49:28.052622    9268 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 19:49:28.055606    9268 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 19:49:28.055606    9268 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 19:49:28.060596    9268 cli_runner.go:217] Completed: docker container inspect cilium-20220531191937-2108 --format={{.State.Status}}: (1.4947412s)
	I0531 19:49:28.063598    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220531191937-2108
	I0531 19:49:28.128802    9268 addons.go:153] Setting addon default-storageclass=true in "cilium-20220531191937-2108"
	W0531 19:49:28.128878    9268 addons.go:165] addon default-storageclass should already be in state true
	I0531 19:49:28.128993    9268 host.go:66] Checking if "cilium-20220531191937-2108" exists ...
	I0531 19:49:28.161460    9268 cli_runner.go:164] Run: docker container inspect cilium-20220531191937-2108 --format={{.State.Status}}
	I0531 19:49:28.689659    9268 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20220531191937-2108: (1.5224334s)
	I0531 19:49:28.697582    9268 node_ready.go:35] waiting up to 5m0s for node "cilium-20220531191937-2108" to be "Ready" ...
	I0531 19:49:28.743185    9268 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.5921819s)
	I0531 19:49:28.743185    9268 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
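The host-record injection logged above works by piping the CoreDNS ConfigMap through `sed`, inserting a `hosts` plugin block immediately before the `forward . /etc/resolv.conf` line. A small reproduction against a sample Corefile (the sample is an assumption; minikube runs the same `sed` expression against the live ConfigMap and feeds the result to `kubectl replace`):

```shell
# Reproduce the Corefile edit from the log: insert a hosts block mapping
# host.minikube.internal before the forward directive. Sample Corefile is
# illustrative, not the cluster's actual ConfigMap.
cat > /tmp/corefile-demo <<'EOF'
.:53 {
        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
}
EOF
sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' /tmp/corefile-demo
```

The `fallthrough` directive is what keeps ordinary lookups working: names not matched by the `hosts` block fall through to the `forward` plugin below it.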
	I0531 19:49:28.834064    9268 node_ready.go:49] node "cilium-20220531191937-2108" has status "Ready":"True"
	I0531 19:49:28.834064    9268 node_ready.go:38] duration metric: took 136.4814ms waiting for node "cilium-20220531191937-2108" to be "Ready" ...
	I0531 19:49:28.834064    9268 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:49:28.957945    9268 pod_ready.go:78] waiting up to 5m0s for pod "cilium-k5t52" in "kube-system" namespace to be "Ready" ...
	I0531 19:49:29.479694    9268 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220531191937-2108: (1.4160899s)
	I0531 19:49:29.479694    9268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54754 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\cilium-20220531191937-2108\id_rsa Username:docker}
	I0531 19:49:29.620783    9268 cli_runner.go:217] Completed: docker container inspect cilium-20220531191937-2108 --format={{.State.Status}}: (1.4593165s)
	I0531 19:49:29.620783    9268 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 19:49:29.620783    9268 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 19:49:29.636808    9268 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220531191937-2108
	I0531 19:49:30.355449    9268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 19:49:31.150681    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:49:31.166688    9268 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220531191937-2108: (1.5298739s)
	I0531 19:49:31.166688    9268 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54754 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\cilium-20220531191937-2108\id_rsa Username:docker}
	I0531 19:49:32.173199    9268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 19:49:33.437522    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:49:34.038067    9268 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.6826015s)
	I0531 19:49:38.945163    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:49:39.028728    9268 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.8554987s)
	I0531 19:49:39.035716    9268 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0531 19:49:39.041725    9268 addons.go:417] enableAddons completed in 12.5108375s
	I0531 19:49:43.216524    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:49:45.660397    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:49:48.147652    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:49:50.645797    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:49:53.140920    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:49:55.143006    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:49:57.145340    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:49:59.646777    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:02.332392    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:04.642744    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:06.644986    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:09.144450    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:11.642196    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:13.647209    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:15.654692    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:18.086828    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:20.146417    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:22.690233    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:25.137333    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:27.203556    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:29.657641    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:31.933017    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:34.332888    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:36.646199    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:38.772665    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:41.079065    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:43.087683    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:45.089192    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:47.102726    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:49.592858    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:52.094119    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:54.590673    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:56.597659    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:59.090202    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:01.585861    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:03.594656    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:06.073904    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:08.101133    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:10.586032    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:12.595767    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:14.632262    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:17.091454    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:19.585051    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:21.597865    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:24.105041    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:26.579529    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:28.595723    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:34.710626    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:39.305965    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:41.660629    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:44.084831    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:46.089411    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:48.090501    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:50.585097    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:52.601404    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:55.087146    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:57.585738    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:00.086100    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:02.591637    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:05.082519    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:07.111644    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:09.584011    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:11.591034    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:23.835787    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:26.126317    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:28.582937    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:30.597794    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:33.083972    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:35.086645    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:37.095700    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:39.590145    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:42.090764    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:44.583109    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:46.588323    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:48.588603    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:50.590425    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:53.082897    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:55.092486    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:57.589445    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:00.093596    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:02.576106    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:04.583100    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:07.093578    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:09.590835    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:12.082143    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:14.084391    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:16.090299    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:18.092088    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:20.595408    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:23.095495    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:25.164988    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:27.582989    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:29.099077    9268 pod_ready.go:81] duration metric: took 4m0.1400756s waiting for pod "cilium-k5t52" in "kube-system" namespace to be "Ready" ...
	E0531 19:53:29.099077    9268 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0531 19:53:29.099077    9268 pod_ready.go:78] waiting up to 5m0s for pod "cilium-operator-78f49c47f-7ccp4" in "kube-system" namespace to be "Ready" ...
	I0531 19:53:29.112072    9268 pod_ready.go:92] pod "cilium-operator-78f49c47f-7ccp4" in "kube-system" namespace has status "Ready":"True"
	I0531 19:53:29.112072    9268 pod_ready.go:81] duration metric: took 12.995ms waiting for pod "cilium-operator-78f49c47f-7ccp4" in "kube-system" namespace to be "Ready" ...
	I0531 19:53:29.112072    9268 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-2ksb8" in "kube-system" namespace to be "Ready" ...
	I0531 19:53:29.118076    9268 pod_ready.go:97] error getting pod "coredns-64897985d-2ksb8" in "kube-system" namespace (skipping!): pods "coredns-64897985d-2ksb8" not found
	I0531 19:53:29.118076    9268 pod_ready.go:81] duration metric: took 6.004ms waiting for pod "coredns-64897985d-2ksb8" in "kube-system" namespace to be "Ready" ...
	E0531 19:53:29.118076    9268 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-2ksb8" in "kube-system" namespace (skipping!): pods "coredns-64897985d-2ksb8" not found
	I0531 19:53:29.118076    9268 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-nktf8" in "kube-system" namespace to be "Ready" ...
	I0531 19:53:31.171043    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:33.680672    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:36.164020    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:38.169363    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:40.175054    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:42.674804    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:44.675717    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:47.164419    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:49.167304    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:51.169173    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:53.170696    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:55.669074    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:57.751868    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:00.168487    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:02.673331    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:05.171251    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:07.679543    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:10.172597    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:12.177544    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:14.673026    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:17.164390    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:19.181278    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:21.670693    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:24.155293    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:26.167533    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:28.176103    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:30.671805    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:33.171691    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:35.670794    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:37.677573    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:39.677965    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:42.163647    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:44.175105    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:46.675665    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:49.176955    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:51.178121    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:53.663345    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:56.172202    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:58.175436    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:00.176836    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:02.666532    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:05.170498    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:07.670613    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:09.678652    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:12.187550    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:14.683238    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:17.179359    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:19.675111    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:21.730399    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:24.229391    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:26.676100    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:29.175353    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:31.194050    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:33.665151    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:35.669948    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:38.165863    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:40.664274    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:43.176392    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:45.673976    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:48.173929    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:50.177274    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:52.674717    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:55.170572    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:57.176717    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:59.665952    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:02.168206    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:04.175318    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:06.677868    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:09.250267    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:11.727271    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:14.170550    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:16.677035    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:19.172519    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:21.173965    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:23.662240    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:25.669983    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:28.176849    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:30.674422    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:33.179413    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:35.662738    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:37.670686    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:39.671676    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:42.165424    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:44.182800    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:46.669148    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:48.672290    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:51.166061    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:53.175432    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:55.668063    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:57.729726    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:00.167239    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:02.174654    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:04.180611    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:06.663118    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:08.665693    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:10.669276    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:12.675358    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:15.166823    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:17.173024    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:19.669927    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:21.677431    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:23.683965    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:26.169101    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:28.173555    9268 pod_ready.go:102] pod "coredns-64897985d-nktf8" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:29.191896    9268 pod_ready.go:81] duration metric: took 4m0.0727573s waiting for pod "coredns-64897985d-nktf8" in "kube-system" namespace to be "Ready" ...
	E0531 19:57:29.191896    9268 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0531 19:57:29.191896    9268 pod_ready.go:38] duration metric: took 8m0.3557122s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:57:29.195871    9268 out.go:177] 
	W0531 19:57:29.197882    9268 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0531 19:57:29.197882    9268 out.go:239] * 
	W0531 19:57:29.198861    9268 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 19:57:29.201861    9268 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/cilium/Start (610.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (587.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-20220531191937-2108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p calico-20220531191937-2108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker: exit status 1 (9m47.1335285s)

                                                
                                                
-- stdout --
	* [calico-20220531191937-2108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node calico-20220531191937-2108 in cluster calico-20220531191937-2108
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 19:49:51.084809    9204 out.go:296] Setting OutFile to fd 1616 ...
	I0531 19:49:51.167390    9204 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:49:51.167390    9204 out.go:309] Setting ErrFile to fd 1840...
	I0531 19:49:51.167390    9204 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:49:51.186974    9204 out.go:303] Setting JSON to false
	I0531 19:49:51.190465    9204 start.go:115] hostinfo: {"hostname":"minikube7","uptime":84861,"bootTime":1653941730,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0531 19:49:51.190465    9204 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 19:49:51.254771    9204 out.go:177] * [calico-20220531191937-2108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0531 19:49:51.260770    9204 notify.go:193] Checking for updates...
	I0531 19:49:51.269758    9204 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0531 19:49:51.276754    9204 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0531 19:49:51.285755    9204 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 19:49:51.296776    9204 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:49:51.301760    9204 config.go:178] Loaded profile config "auto-20220531191922-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:49:51.301760    9204 config.go:178] Loaded profile config "cilium-20220531191937-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:49:51.302765    9204 config.go:178] Loaded profile config "embed-certs-20220531193346-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:49:51.302765    9204 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 19:49:54.992808    9204 docker.go:137] docker version: linux-20.10.14
	I0531 19:49:54.999822    9204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:49:57.486481    9204 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.4861987s)
	I0531 19:49:57.487080    9204 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:72 OomKillDisable:true NGoroutines:58 SystemTime:2022-05-31 19:49:56.2369805 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 19:49:57.604786    9204 out.go:177] * Using the docker driver based on user configuration
	I0531 19:49:57.608317    9204 start.go:284] selected driver: docker
	I0531 19:49:57.608513    9204 start.go:806] validating driver "docker" against <nil>
	I0531 19:49:57.608513    9204 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:49:57.700957    9204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:50:00.270796    9204 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.5698275s)
	I0531 19:50:00.270796    9204 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:72 OomKillDisable:true NGoroutines:58 SystemTime:2022-05-31 19:49:59.025748 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 19:50:00.270796    9204 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0531 19:50:00.272492    9204 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 19:50:00.275572    9204 out.go:177] * Using Docker Desktop driver with the root privilege
	I0531 19:50:00.278649    9204 cni.go:95] Creating CNI manager for "calico"
	I0531 19:50:00.278649    9204 start_flags.go:301] Found "Calico" CNI - setting NetworkPlugin=cni
	I0531 19:50:00.278649    9204 start_flags.go:306] config:
	{Name:calico-20220531191937-2108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220531191937-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 19:50:00.283331    9204 out.go:177] * Starting control plane node calico-20220531191937-2108 in cluster calico-20220531191937-2108
	I0531 19:50:00.287897    9204 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 19:50:00.290886    9204 out.go:177] * Pulling base image ...
	I0531 19:50:00.295769    9204 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 19:50:00.295769    9204 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 19:50:00.295769    9204 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0531 19:50:00.295769    9204 cache.go:57] Caching tarball of preloaded images
	I0531 19:50:00.296331    9204 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 19:50:00.296577    9204 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0531 19:50:00.296780    9204 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\config.json ...
	I0531 19:50:00.296892    9204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\config.json: {Name:mk395a5aeceb2554c99cc9c4c3ac1d1fc9bee949 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:50:01.564541    9204 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 19:50:01.564541    9204 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 19:50:01.564541    9204 cache.go:206] Successfully downloaded all kic artifacts
	I0531 19:50:01.564541    9204 start.go:352] acquiring machines lock for calico-20220531191937-2108: {Name:mk229298a8341a90ce561add7d1a945d7b3315d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:50:01.564541    9204 start.go:356] acquired machines lock for "calico-20220531191937-2108" in 0s
	I0531 19:50:01.564541    9204 start.go:91] Provisioning new machine with config: &{Name:calico-20220531191937-2108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220531191937-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 19:50:01.564541    9204 start.go:131] createHost starting for "" (driver="docker")
	I0531 19:50:01.568587    9204 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0531 19:50:01.568587    9204 start.go:165] libmachine.API.Create for "calico-20220531191937-2108" (driver="docker")
	I0531 19:50:01.568587    9204 client.go:168] LocalClient.Create starting
	I0531 19:50:01.569562    9204 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0531 19:50:01.569562    9204 main.go:134] libmachine: Decoding PEM data...
	I0531 19:50:01.569562    9204 main.go:134] libmachine: Parsing certificate...
	I0531 19:50:01.569562    9204 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0531 19:50:01.569562    9204 main.go:134] libmachine: Decoding PEM data...
	I0531 19:50:01.569562    9204 main.go:134] libmachine: Parsing certificate...
	I0531 19:50:01.578550    9204 cli_runner.go:164] Run: docker network inspect calico-20220531191937-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 19:50:02.878167    9204 cli_runner.go:211] docker network inspect calico-20220531191937-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 19:50:02.878167    9204 cli_runner.go:217] Completed: docker network inspect calico-20220531191937-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2996109s)
	I0531 19:50:02.885168    9204 network_create.go:272] running [docker network inspect calico-20220531191937-2108] to gather additional debugging logs...
	I0531 19:50:02.885168    9204 cli_runner.go:164] Run: docker network inspect calico-20220531191937-2108
	W0531 19:50:04.147653    9204 cli_runner.go:211] docker network inspect calico-20220531191937-2108 returned with exit code 1
	I0531 19:50:04.147653    9204 cli_runner.go:217] Completed: docker network inspect calico-20220531191937-2108: (1.2624794s)
	I0531 19:50:04.147653    9204 network_create.go:275] error running [docker network inspect calico-20220531191937-2108]: docker network inspect calico-20220531191937-2108: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220531191937-2108
	I0531 19:50:04.147653    9204 network_create.go:277] output of [docker network inspect calico-20220531191937-2108]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220531191937-2108
	
	** /stderr **
	I0531 19:50:04.157637    9204 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 19:50:05.415515    9204 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2578719s)
	I0531 19:50:05.445814    9204 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000006d88] misses:0}
	I0531 19:50:05.445814    9204 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 19:50:05.445814    9204 network_create.go:115] attempt to create docker network calico-20220531191937-2108 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0531 19:50:05.453842    9204 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220531191937-2108
	I0531 19:50:06.783335    9204 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220531191937-2108: (1.3294874s)
	I0531 19:50:06.783335    9204 network_create.go:99] docker network calico-20220531191937-2108 192.168.49.0/24 created
	I0531 19:50:06.783335    9204 kic.go:106] calculated static IP "192.168.49.2" for the "calico-20220531191937-2108" container
	I0531 19:50:06.796335    9204 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 19:50:08.150320    9204 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.3528697s)
	I0531 19:50:08.159915    9204 cli_runner.go:164] Run: docker volume create calico-20220531191937-2108 --label name.minikube.sigs.k8s.io=calico-20220531191937-2108 --label created_by.minikube.sigs.k8s.io=true
	I0531 19:50:09.384153    9204 cli_runner.go:217] Completed: docker volume create calico-20220531191937-2108 --label name.minikube.sigs.k8s.io=calico-20220531191937-2108 --label created_by.minikube.sigs.k8s.io=true: (1.2242333s)
	I0531 19:50:09.384392    9204 oci.go:103] Successfully created a docker volume calico-20220531191937-2108
	I0531 19:50:09.393540    9204 cli_runner.go:164] Run: docker run --rm --name calico-20220531191937-2108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220531191937-2108 --entrypoint /usr/bin/test -v calico-20220531191937-2108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib
	I0531 19:50:12.953319    9204 cli_runner.go:217] Completed: docker run --rm --name calico-20220531191937-2108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220531191937-2108 --entrypoint /usr/bin/test -v calico-20220531191937-2108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib: (3.5597634s)
	I0531 19:50:12.953319    9204 oci.go:107] Successfully prepared a docker volume calico-20220531191937-2108
	I0531 19:50:12.953319    9204 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 19:50:12.953319    9204 kic.go:179] Starting extracting preloaded images to volume ...
	I0531 19:50:12.965313    9204 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220531191937-2108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 19:50:40.202571    9204 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220531191937-2108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir: (27.2371385s)
	I0531 19:50:40.202571    9204 kic.go:188] duration metric: took 27.249133 seconds to extract preloaded images to volume
	I0531 19:50:40.212968    9204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:50:42.425561    9204 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.212583s)
	I0531 19:50:42.426310    9204 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:74 OomKillDisable:true NGoroutines:58 SystemTime:2022-05-31 19:50:41.2734932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 19:50:42.437858    9204 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 19:50:44.621798    9204 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.1839299s)
	I0531 19:50:44.628791    9204 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220531191937-2108 --name calico-20220531191937-2108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220531191937-2108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220531191937-2108 --network calico-20220531191937-2108 --ip 192.168.49.2 --volume calico-20220531191937-2108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418
	I0531 19:50:47.120642    9204 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220531191937-2108 --name calico-20220531191937-2108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220531191937-2108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220531191937-2108 --network calico-20220531191937-2108 --ip 192.168.49.2 --volume calico-20220531191937-2108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418: (2.4917556s)
	I0531 19:50:47.129556    9204 cli_runner.go:164] Run: docker container inspect calico-20220531191937-2108 --format={{.State.Running}}
	I0531 19:50:48.475805    9204 cli_runner.go:217] Completed: docker container inspect calico-20220531191937-2108 --format={{.State.Running}}: (1.3462433s)
	I0531 19:50:48.486615    9204 cli_runner.go:164] Run: docker container inspect calico-20220531191937-2108 --format={{.State.Status}}
	I0531 19:50:49.783563    9204 cli_runner.go:217] Completed: docker container inspect calico-20220531191937-2108 --format={{.State.Status}}: (1.2968512s)
	I0531 19:50:49.795640    9204 cli_runner.go:164] Run: docker exec calico-20220531191937-2108 stat /var/lib/dpkg/alternatives/iptables
	I0531 19:50:51.194840    9204 cli_runner.go:217] Completed: docker exec calico-20220531191937-2108 stat /var/lib/dpkg/alternatives/iptables: (1.3991943s)
	I0531 19:50:51.194840    9204 oci.go:247] the created container "calico-20220531191937-2108" has a running status.
	I0531 19:50:51.194840    9204 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220531191937-2108\id_rsa...
	I0531 19:50:51.352748    9204 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220531191937-2108\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0531 19:50:52.714374    9204 cli_runner.go:164] Run: docker container inspect calico-20220531191937-2108 --format={{.State.Status}}
	I0531 19:50:53.983691    9204 cli_runner.go:217] Completed: docker container inspect calico-20220531191937-2108 --format={{.State.Status}}: (1.2693109s)
	I0531 19:50:53.999679    9204 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0531 19:50:53.999679    9204 kic_runner.go:114] Args: [docker exec --privileged calico-20220531191937-2108 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0531 19:50:55.344133    9204 kic_runner.go:123] Done: [docker exec --privileged calico-20220531191937-2108 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.3442906s)
	I0531 19:50:55.347971    9204 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220531191937-2108\id_rsa...
	I0531 19:50:55.925795    9204 cli_runner.go:164] Run: docker container inspect calico-20220531191937-2108 --format={{.State.Status}}
	I0531 19:50:57.128250    9204 cli_runner.go:217] Completed: docker container inspect calico-20220531191937-2108 --format={{.State.Status}}: (1.2024495s)
	I0531 19:50:57.128250    9204 machine.go:88] provisioning docker machine ...
	I0531 19:50:57.128250    9204 ubuntu.go:169] provisioning hostname "calico-20220531191937-2108"
	I0531 19:50:57.136258    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:50:58.384809    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.2485456s)
	I0531 19:50:58.389815    9204 main.go:134] libmachine: Using SSH client type: native
	I0531 19:50:58.396816    9204 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54919 <nil> <nil>}
	I0531 19:50:58.396816    9204 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-20220531191937-2108 && echo "calico-20220531191937-2108" | sudo tee /etc/hostname
	I0531 19:50:58.637577    9204 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-20220531191937-2108
	
	I0531 19:50:58.647947    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:50:59.961886    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.3139332s)
	I0531 19:50:59.966482    9204 main.go:134] libmachine: Using SSH client type: native
	I0531 19:50:59.967181    9204 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54919 <nil> <nil>}
	I0531 19:50:59.967181    9204 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220531191937-2108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220531191937-2108/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220531191937-2108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 19:51:00.166174    9204 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 19:51:00.166174    9204 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0531 19:51:00.166267    9204 ubuntu.go:177] setting up certificates
	I0531 19:51:00.166267    9204 provision.go:83] configureAuth start
	I0531 19:51:00.174598    9204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220531191937-2108
	I0531 19:51:01.477955    9204 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220531191937-2108: (1.303351s)
	I0531 19:51:01.477955    9204 provision.go:138] copyHostCerts
	I0531 19:51:01.477955    9204 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0531 19:51:01.477955    9204 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0531 19:51:01.478916    9204 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0531 19:51:01.479924    9204 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0531 19:51:01.479924    9204 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0531 19:51:01.479924    9204 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0531 19:51:01.481945    9204 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0531 19:51:01.481945    9204 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0531 19:51:01.481945    9204 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0531 19:51:01.482907    9204 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.calico-20220531191937-2108 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220531191937-2108]
	I0531 19:51:01.638392    9204 provision.go:172] copyRemoteCerts
	I0531 19:51:01.648401    9204 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 19:51:01.656385    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:51:02.904839    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.2483134s)
	I0531 19:51:02.904949    9204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54919 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220531191937-2108\id_rsa Username:docker}
	I0531 19:51:03.060111    9204 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.4117041s)
	I0531 19:51:03.060887    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 19:51:03.138138    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0531 19:51:03.189140    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0531 19:51:03.247164    9204 provision.go:86] duration metric: configureAuth took 3.0808835s
	I0531 19:51:03.247164    9204 ubuntu.go:193] setting minikube options for container-runtime
	I0531 19:51:03.247164    9204 config.go:178] Loaded profile config "calico-20220531191937-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:51:03.261180    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:51:04.500656    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.239471s)
	I0531 19:51:04.504628    9204 main.go:134] libmachine: Using SSH client type: native
	I0531 19:51:04.504628    9204 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54919 <nil> <nil>}
	I0531 19:51:04.504628    9204 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0531 19:51:04.713593    9204 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0531 19:51:04.713593    9204 ubuntu.go:71] root file system type: overlay
	I0531 19:51:04.714591    9204 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0531 19:51:04.721589    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:51:05.902536    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.1809424s)
	I0531 19:51:05.906537    9204 main.go:134] libmachine: Using SSH client type: native
	I0531 19:51:05.907557    9204 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54919 <nil> <nil>}
	I0531 19:51:05.907557    9204 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0531 19:51:06.138734    9204 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
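The unit file echoed back above leans on a systemd behavior worth spelling out: assigning an empty `ExecStart=` discards any start command inherited from the base unit, which is why the file sets `ExecStart=` twice. A minimal standalone illustration of the pattern (the override path and dockerd flags here are invented for the demo, not minikube's):

```ini
# Hypothetical drop-in at /etc/systemd/system/docker.service.d/override.conf
[Service]
# An empty ExecStart= clears the command inherited from the base unit.
# Without it, systemd sees two ExecStart= settings and refuses to start
# the service, since multiple ExecStart= lines are only allowed for
# Type=oneshot units.
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```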
	I0531 19:51:06.148744    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:51:07.427387    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.2776444s)
	I0531 19:51:07.432952    9204 main.go:134] libmachine: Using SSH client type: native
	I0531 19:51:07.433950    9204 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54919 <nil> <nil>}
	I0531 19:51:07.433950    9204 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0531 19:51:09.087698    9204 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-31 19:51:06.120425000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
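The SSH command at 19:51:07 uses a compare-then-swap idiom: `diff -u old new || { mv …; restart; }` replaces the live unit and restarts Docker only when the freshly generated file actually differs. A sketch of the same pattern against throwaway temp files (the `$tmp` paths are made up for the demo, and the `systemctl daemon-reload`/`enable`/`restart` steps from the real command are omitted):

```shell
set -eu
tmp=$(mktemp -d)
printf 'old\n' > "$tmp/docker.service"
printf 'new\n' > "$tmp/docker.service.new"

# diff exits non-zero when the files differ, so the branch after ||
# runs only in that case; identical files leave everything untouched.
diff -u "$tmp/docker.service" "$tmp/docker.service.new" >/dev/null || {
  mv "$tmp/docker.service.new" "$tmp/docker.service"
}
cat "$tmp/docker.service"
```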
	I0531 19:51:09.087698    9204 machine.go:91] provisioned docker machine in 11.9593959s
	I0531 19:51:09.087698    9204 client.go:171] LocalClient.Create took 1m7.5188147s
	I0531 19:51:09.087698    9204 start.go:173] duration metric: libmachine.API.Create for "calico-20220531191937-2108" took 1m7.5188147s
	I0531 19:51:09.087698    9204 start.go:306] post-start starting for "calico-20220531191937-2108" (driver="docker")
	I0531 19:51:09.087698    9204 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 19:51:09.098732    9204 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 19:51:09.106693    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:51:10.371344    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.264614s)
	I0531 19:51:10.372478    9204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54919 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220531191937-2108\id_rsa Username:docker}
	I0531 19:51:10.511036    9204 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.4122985s)
	I0531 19:51:10.522025    9204 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 19:51:10.533022    9204 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 19:51:10.533022    9204 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 19:51:10.533022    9204 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 19:51:10.533022    9204 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 19:51:10.533022    9204 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0531 19:51:10.533022    9204 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0531 19:51:10.534037    9204 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\21082.pem -> 21082.pem in /etc/ssl/certs
	I0531 19:51:10.551037    9204 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 19:51:10.572032    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\21082.pem --> /etc/ssl/certs/21082.pem (1708 bytes)
	I0531 19:51:10.635039    9204 start.go:309] post-start completed in 1.5473339s
	I0531 19:51:10.649038    9204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220531191937-2108
	I0531 19:51:11.927436    9204 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220531191937-2108: (1.2782851s)
	I0531 19:51:11.927739    9204 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\config.json ...
	I0531 19:51:11.946471    9204 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:51:11.954471    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:51:13.203722    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.2492449s)
	I0531 19:51:13.203722    9204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54919 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220531191937-2108\id_rsa Username:docker}
	I0531 19:51:13.292052    9204 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.3445869s)
	I0531 19:51:13.301045    9204 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
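The two `df` probes above each pull one cell out of df's table with awk: `NR==2` selects the filesystem row beneath the header, `$5` is the Use% column under `-h`, and `$4` is available space in whole gigabytes under `-BG`. A standalone check of the extraction, run against `/` since `/var` may not be a separate mount outside the minikube container:

```shell
set -eu
# NR==2 skips df's header row; column 5 of `df -h` is Use% (e.g. "42%").
used=$(df -h / | awk 'NR==2{print $5}')
# With -BG all sizes print as whole gigabytes, so column 4 (Avail)
# comes out like "12G".
free=$(df -BG / | awk 'NR==2{print $4}')
echo "$used $free"
```

Note the row-2 assumption breaks if a very long device name makes df wrap the filesystem line; for the overlay mounts minikube inspects, it holds.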
	I0531 19:51:13.313047    9204 start.go:134] duration metric: createHost completed in 1m11.7481896s
	I0531 19:51:13.313047    9204 start.go:81] releasing machines lock for "calico-20220531191937-2108", held for 1m11.7481896s
	I0531 19:51:13.320047    9204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220531191937-2108
	I0531 19:51:14.526134    9204 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220531191937-2108: (1.2058332s)
	I0531 19:51:14.530551    9204 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 19:51:14.540587    9204 ssh_runner.go:195] Run: systemctl --version
	I0531 19:51:14.544618    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:51:14.554196    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:51:15.796816    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.2521124s)
	I0531 19:51:15.797621    9204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54919 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220531191937-2108\id_rsa Username:docker}
	I0531 19:51:15.820048    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.2658464s)
	I0531 19:51:15.820048    9204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54919 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220531191937-2108\id_rsa Username:docker}
	I0531 19:51:16.029994    9204 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.4992808s)
	I0531 19:51:16.029994    9204 ssh_runner.go:235] Completed: systemctl --version: (1.4894001s)
	I0531 19:51:16.047688    9204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 19:51:16.109228    9204 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 19:51:16.150091    9204 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0531 19:51:16.165654    9204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 19:51:16.192715    9204 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 19:51:16.251284    9204 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0531 19:51:16.429590    9204 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0531 19:51:16.728499    9204 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 19:51:16.779238    9204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:51:16.986896    9204 ssh_runner.go:195] Run: sudo systemctl start docker
	I0531 19:51:17.030256    9204 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 19:51:17.159713    9204 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 19:51:17.494325    9204 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0531 19:51:17.508083    9204 cli_runner.go:164] Run: docker exec -t calico-20220531191937-2108 dig +short host.docker.internal
	I0531 19:51:18.989400    9204 cli_runner.go:217] Completed: docker exec -t calico-20220531191937-2108 dig +short host.docker.internal: (1.4813113s)
	I0531 19:51:18.989400    9204 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0531 19:51:19.001193    9204 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0531 19:51:19.019131    9204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
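The hosts-file edit at 19:51:19 is an idempotent append: filter out any stale `host.minikube.internal` line, append the current mapping, write to a temp file, and copy the result back in one step. A simplified sketch against a temp file (the tab-anchored `$'\t…'` grep pattern from the log is loosened here for portability):

```shell
set -eu
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.65.1\thost.minikube.internal\n' > "$hosts"

# Drop any existing entry for the name, then append the fresh mapping;
# writing to a temp file first and copying back keeps the update whole.
{ grep -v 'host.minikube.internal$' "$hosts"
  printf '192.168.65.2\thost.minikube.internal\n'; } > "$hosts.$$"
cp "$hosts.$$" "$hosts"
```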
	I0531 19:51:19.065499    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:51:20.327768    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.261278s)
	I0531 19:51:20.327768    9204 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 19:51:20.337236    9204 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 19:51:20.420821    9204 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0531 19:51:20.420977    9204 docker.go:541] Images already preloaded, skipping extraction
	I0531 19:51:20.435383    9204 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 19:51:20.533516    9204 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0531 19:51:20.533582    9204 cache_images.go:84] Images are preloaded, skipping loading
	I0531 19:51:20.544179    9204 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0531 19:51:20.763164    9204 cni.go:95] Creating CNI manager for "calico"
	I0531 19:51:20.763266    9204 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 19:51:20.763266    9204 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220531191937-2108 NodeName:calico-20220531191937-2108 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 19:51:20.763605    9204 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "calico-20220531191937-2108"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 19:51:20.763779    9204 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20220531191937-2108 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:calico-20220531191937-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0531 19:51:20.777560    9204 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 19:51:20.812377    9204 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 19:51:20.822173    9204 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 19:51:20.847759    9204 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0531 19:51:20.894419    9204 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 19:51:20.943002    9204 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I0531 19:51:21.004999    9204 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0531 19:51:21.027970    9204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 19:51:21.065422    9204 certs.go:54] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108 for IP: 192.168.49.2
	I0531 19:51:21.066491    9204 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0531 19:51:21.066798    9204 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0531 19:51:21.068010    9204 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\client.key
	I0531 19:51:21.068489    9204 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\client.crt with IP's: []
	I0531 19:51:21.239497    9204 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\client.crt ...
	I0531 19:51:21.239497    9204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\client.crt: {Name:mk7717fa2d448864e461cc54e83296f68b8463bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:51:21.240569    9204 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\client.key ...
	I0531 19:51:21.240569    9204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\client.key: {Name:mkbd89bf22718c6399768f822a13f7683a912fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:51:21.241576    9204 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\apiserver.key.dd3b5fb2
	I0531 19:51:21.241576    9204 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0531 19:51:21.303988    9204 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\apiserver.crt.dd3b5fb2 ...
	I0531 19:51:21.303988    9204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\apiserver.crt.dd3b5fb2: {Name:mk1c798eabfdccece8c43513d5079e690fc5c5f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:51:21.304577    9204 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\apiserver.key.dd3b5fb2 ...
	I0531 19:51:21.304577    9204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\apiserver.key.dd3b5fb2: {Name:mk508c26789c2c5b39d18c925674707c3be71d06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:51:21.305706    9204 certs.go:320] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\apiserver.crt.dd3b5fb2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\apiserver.crt
	I0531 19:51:21.312663    9204 certs.go:324] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\apiserver.key.dd3b5fb2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\apiserver.key
	I0531 19:51:21.313459    9204 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\proxy-client.key
	I0531 19:51:21.314524    9204 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\proxy-client.crt with IP's: []
	I0531 19:51:21.471497    9204 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\proxy-client.crt ...
	I0531 19:51:21.471497    9204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\proxy-client.crt: {Name:mk632ff53178cf3468ba9f6e8992cf6c07b84866 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:51:21.473256    9204 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\proxy-client.key ...
	I0531 19:51:21.473256    9204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\proxy-client.key: {Name:mkb4191699fbbb28010ed2ba28eed8f9214b0550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:51:21.481348    9204 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\2108.pem (1338 bytes)
	W0531 19:51:21.482170    9204 certs.go:384] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\2108_empty.pem, impossibly tiny 0 bytes
	I0531 19:51:21.482170    9204 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0531 19:51:21.482611    9204 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0531 19:51:21.482995    9204 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0531 19:51:21.483267    9204 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0531 19:51:21.483525    9204 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\21082.pem (1708 bytes)
	I0531 19:51:21.484551    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 19:51:21.565598    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 19:51:21.652047    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 19:51:21.708339    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 19:51:21.766482    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 19:51:21.824736    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 19:51:21.895741    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 19:51:21.964128    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 19:51:22.031639    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 19:51:22.113459    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\2108.pem --> /usr/share/ca-certificates/2108.pem (1338 bytes)
	I0531 19:51:22.170954    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\21082.pem --> /usr/share/ca-certificates/21082.pem (1708 bytes)
	I0531 19:51:22.227855    9204 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 19:51:22.293264    9204 ssh_runner.go:195] Run: openssl version
	I0531 19:51:22.325002    9204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2108.pem && ln -fs /usr/share/ca-certificates/2108.pem /etc/ssl/certs/2108.pem"
	I0531 19:51:22.376521    9204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2108.pem
	I0531 19:51:22.393458    9204 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:31 /usr/share/ca-certificates/2108.pem
	I0531 19:51:22.403431    9204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2108.pem
	I0531 19:51:22.425445    9204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2108.pem /etc/ssl/certs/51391683.0"
	I0531 19:51:22.462364    9204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21082.pem && ln -fs /usr/share/ca-certificates/21082.pem /etc/ssl/certs/21082.pem"
	I0531 19:51:22.517805    9204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21082.pem
	I0531 19:51:22.527793    9204 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:31 /usr/share/ca-certificates/21082.pem
	I0531 19:51:22.538806    9204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21082.pem
	I0531 19:51:22.561793    9204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21082.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 19:51:22.598947    9204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 19:51:22.642030    9204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:51:22.653776    9204 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:19 /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:51:22.668000    9204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:51:22.713832    9204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 19:51:22.737619    9204 kubeadm.go:395] StartCluster: {Name:calico-20220531191937-2108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220531191937-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 19:51:22.750642    9204 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 19:51:22.840394    9204 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 19:51:22.881382    9204 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 19:51:22.906373    9204 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 19:51:22.918377    9204 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 19:51:22.946873    9204 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 19:51:22.946986    9204 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 19:51:58.030943    9204 out.go:204]   - Generating certificates and keys ...
	I0531 19:51:58.039385    9204 out.go:204]   - Booting up control plane ...
	I0531 19:51:58.047363    9204 out.go:204]   - Configuring RBAC rules ...
	I0531 19:51:58.054027    9204 cni.go:95] Creating CNI manager for "calico"
	I0531 19:51:58.062025    9204 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0531 19:51:58.069950    9204 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0531 19:51:58.069950    9204 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
	I0531 19:51:58.253840    9204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 19:52:03.940854    9204 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (5.6869888s)
	I0531 19:52:03.940854    9204 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 19:52:03.954831    9204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:52:03.957834    9204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454 minikube.k8s.io/name=calico-20220531191937-2108 minikube.k8s.io/updated_at=2022_05_31T19_52_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:52:03.958840    9204 ops.go:34] apiserver oom_adj: -16
	I0531 19:52:04.261892    9204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:52:05.052524    9204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:52:05.552939    9204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:52:06.053314    9204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:52:06.543668    9204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:52:07.552471    9204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:52:08.041491    9204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:52:08.548002    9204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:52:09.545455    9204 kubeadm.go:1045] duration metric: took 5.6045765s to wait for elevateKubeSystemPrivileges.
	I0531 19:52:09.545455    9204 kubeadm.go:397] StartCluster complete in 46.8076305s
	I0531 19:52:09.545455    9204 settings.go:142] acquiring lock: {Name:mk2f48f1c2db86c45c5c20d13312e07e9c171d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:52:09.546498    9204 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0531 19:52:09.550091    9204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\kubeconfig: {Name:mk3b9816be6135fef42a073128e9ed356868417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:52:10.727104    9204 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220531191937-2108" rescaled to 1
	I0531 19:52:10.727310    9204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 19:52:10.727354    9204 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0531 19:52:10.727460    9204 addons.go:65] Setting storage-provisioner=true in profile "calico-20220531191937-2108"
	I0531 19:52:10.727460    9204 addons.go:153] Setting addon storage-provisioner=true in "calico-20220531191937-2108"
	W0531 19:52:10.727460    9204 addons.go:165] addon storage-provisioner should already be in state true
	I0531 19:52:10.727256    9204 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 19:52:10.733182    9204 out.go:177] * Verifying Kubernetes components...
	I0531 19:52:10.727615    9204 host.go:66] Checking if "calico-20220531191937-2108" exists ...
	I0531 19:52:10.727615    9204 addons.go:65] Setting default-storageclass=true in profile "calico-20220531191937-2108"
	I0531 19:52:10.728564    9204 config.go:178] Loaded profile config "calico-20220531191937-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:52:10.737456    9204 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220531191937-2108"
	I0531 19:52:10.758824    9204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:52:10.768720    9204 cli_runner.go:164] Run: docker container inspect calico-20220531191937-2108 --format={{.State.Status}}
	I0531 19:52:10.771652    9204 cli_runner.go:164] Run: docker container inspect calico-20220531191937-2108 --format={{.State.Status}}
	I0531 19:52:11.242586    9204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 19:52:11.254583    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:52:12.158698    9204 cli_runner.go:217] Completed: docker container inspect calico-20220531191937-2108 --format={{.State.Status}}: (1.3896755s)
	I0531 19:52:12.189661    9204 cli_runner.go:217] Completed: docker container inspect calico-20220531191937-2108 --format={{.State.Status}}: (1.4180026s)
	I0531 19:52:12.192506    9204 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 19:52:12.194942    9204 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 19:52:12.194993    9204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 19:52:12.202681    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:52:12.233357    9204 addons.go:153] Setting addon default-storageclass=true in "calico-20220531191937-2108"
	W0531 19:52:12.233357    9204 addons.go:165] addon default-storageclass should already be in state true
	I0531 19:52:12.233357    9204 host.go:66] Checking if "calico-20220531191937-2108" exists ...
	I0531 19:52:12.258602    9204 cli_runner.go:164] Run: docker container inspect calico-20220531191937-2108 --format={{.State.Status}}
	I0531 19:52:12.693527    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.4379358s)
	I0531 19:52:12.696520    9204 node_ready.go:35] waiting up to 5m0s for node "calico-20220531191937-2108" to be "Ready" ...
	I0531 19:52:13.527884    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.3249201s)
	I0531 19:52:13.528484    9204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54919 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220531191937-2108\id_rsa Username:docker}
	I0531 19:52:13.549378    9204 cli_runner.go:217] Completed: docker container inspect calico-20220531191937-2108 --format={{.State.Status}}: (1.2907704s)
	I0531 19:52:13.549378    9204 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 19:52:13.549378    9204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 19:52:13.558378    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:52:13.756638    9204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 19:52:14.783173    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.2247896s)
	I0531 19:52:14.783173    9204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54919 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220531191937-2108\id_rsa Username:docker}
	I0531 19:52:14.943814    9204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 19:52:19.119063    9204 node_ready.go:49] node "calico-20220531191937-2108" has status "Ready":"True"
	I0531 19:52:19.119063    9204 node_ready.go:38] duration metric: took 6.4225145s waiting for node "calico-20220531191937-2108" to be "Ready" ...
	I0531 19:52:19.119063    9204 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:52:19.154075    9204 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace to be "Ready" ...
	I0531 19:52:23.932738    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-31 19:52:11 +0000 GMT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:BestEffort EphemeralContainerStatuses:[]}
	I0531 19:52:24.544620    9204 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (13.3019754s)
	I0531 19:52:24.544620    9204 start.go:806] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0531 19:52:25.143136    9204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.3864474s)
	I0531 19:52:25.143136    9204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.1992773s)
	I0531 19:52:25.147133    9204 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0531 19:52:25.151139    9204 addons.go:417] enableAddons completed in 14.4237658s
	I0531 19:52:26.041942    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:28.472771    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:30.530447    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:32.542827    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:35.029877    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:37.044811    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:39.542174    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:41.542552    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:44.030986    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:46.051345    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:48.528870    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:50.532047    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:52.540971    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:55.028498    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:57.044714    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:52:59.460421    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:01.630779    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:03.959658    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:06.462598    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:08.964919    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:11.027276    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:13.030457    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:15.031239    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:17.536603    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:20.028045    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:22.031699    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:24.464092    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:26.539304    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:29.044720    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:31.541633    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:34.028350    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:36.460328    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:38.470435    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:40.963203    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:42.968439    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:45.532393    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:48.026845    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:50.531114    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:53.062206    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:55.476871    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:53:57.531022    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:00.032621    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:02.467041    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:05.028868    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:07.465826    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:09.539661    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:12.030493    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:14.461051    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:16.463104    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:18.965894    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:21.034094    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:23.106036    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:25.531208    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:27.963922    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:30.531210    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:32.959615    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:34.964393    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:36.975543    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:39.030388    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:41.031352    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:43.457022    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:45.460428    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:47.464778    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:49.961546    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:52.628592    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:54.962147    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:56.975563    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:54:59.459834    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:01.466201    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:03.531986    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:05.964008    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:07.970919    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:10.030869    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:12.463365    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:14.531202    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:16.958678    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:19.462152    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:21.532050    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:23.973430    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:26.463430    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:28.530105    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:30.543953    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:32.968121    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:35.468893    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:37.960294    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:39.964525    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:42.460987    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:44.527622    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:47.028302    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:49.528820    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:51.542048    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:53.547973    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:56.047840    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:55:58.465424    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:00.962098    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:03.030667    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:05.460672    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:08.031746    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:10.041967    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:12.471307    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:14.530811    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:16.967062    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:18.969873    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:21.459452    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:23.470088    9204 pod_ready.go:102] pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:24.056013    9204 pod_ready.go:81] duration metric: took 4m4.9008606s waiting for pod "calico-kube-controllers-8594699699-pcfnh" in "kube-system" namespace to be "Ready" ...
	E0531 19:56:24.056013    9204 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0531 19:56:24.056013    9204 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-qtflq" in "kube-system" namespace to be "Ready" ...
	I0531 19:56:26.178049    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:28.246815    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:30.333070    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:32.732366    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:34.834906    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:37.178199    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:39.231619    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:41.246407    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:43.738315    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:46.242768    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:48.675661    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:50.745192    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:53.179295    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:55.227539    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:57.729726    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:56:59.747986    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:02.191563    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:04.232674    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:06.262232    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:08.746293    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:10.830433    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:13.176022    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:15.241198    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:17.673403    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:19.689475    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:22.184091    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:24.738663    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:27.188736    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:29.241890    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:31.677488    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:34.247639    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:36.681976    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:38.692168    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:40.747480    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:43.194374    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:45.244518    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:47.832335    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:50.333494    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:52.746440    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:55.242793    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:57.686331    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:57:59.731230    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:58:02.186480    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:58:04.245167    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:58:06.732694    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:58:09.242037    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:58:11.731873    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:58:21.527484    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:58:23.682020    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:58:25.732651    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:58:28.228374    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:58:30.240668    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:58:32.733306    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:58:35.182359    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:58:37.679558    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:58:39.729816    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:58:41.746006    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:58:44.171599    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:58:46.176963    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:58:48.684013    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:58:50.740540    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:58:53.244008    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:58:55.681376    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:58:57.688330    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:58:59.747018    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:59:02.238923    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:59:04.685616    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:59:06.730312    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:59:09.170075    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:59:11.241796    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:59:13.730663    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:59:16.170813    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:59:18.187519    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:59:20.331885    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:59:22.670573    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:59:24.748485    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:59:27.234415    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:59:29.331323    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:59:31.831885    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:59:34.261332    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"
	I0531 19:59:36.829871    9204 pod_ready.go:102] pod "calico-node-qtflq" in "kube-system" namespace has status "Ready":"False"

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 1
--- FAIL: TestNetworkPlugins/group/calico/Start (587.40s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (64.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-20220531193346-2108 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p embed-certs-20220531193346-2108 --alsologtostderr -v=1: exit status 80 (8.4588999s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-20220531193346-2108 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 19:50:53.770841    7572 out.go:296] Setting OutFile to fd 1732 ...
	I0531 19:50:53.825966    7572 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:50:53.826023    7572 out.go:309] Setting ErrFile to fd 1988...
	I0531 19:50:53.826023    7572 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:50:53.839232    7572 out.go:303] Setting JSON to false
	I0531 19:50:53.839232    7572 mustload.go:65] Loading cluster: embed-certs-20220531193346-2108
	I0531 19:50:53.840329    7572 config.go:178] Loaded profile config "embed-certs-20220531193346-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:50:53.858114    7572 cli_runner.go:164] Run: docker container inspect embed-certs-20220531193346-2108 --format={{.State.Status}}
	I0531 19:50:56.692674    7572 cli_runner.go:217] Completed: docker container inspect embed-certs-20220531193346-2108 --format={{.State.Status}}: (2.8345478s)
	I0531 19:50:56.692674    7572 host.go:66] Checking if "embed-certs-20220531193346-2108" exists ...
	I0531 19:50:56.709181    7572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:50:57.928193    7572 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.2188883s)
	I0531 19:50:57.931339    7572 pause.go:58] "namespaces" ="keys" ="(MISSING)"
	I0531 19:50:57.938318    7572 out.go:177] * Pausing node embed-certs-20220531193346-2108 ... 
	I0531 19:50:57.941337    7572 host.go:66] Checking if "embed-certs-20220531193346-2108" exists ...
	I0531 19:50:57.952298    7572 ssh_runner.go:195] Run: systemctl --version
	I0531 19:50:57.959311    7572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108
	I0531 19:50:59.363423    7572 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220531193346-2108: (1.4039725s)
	I0531 19:50:59.363885    7572 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54560 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\embed-certs-20220531193346-2108\id_rsa Username:docker}
	I0531 19:50:59.506205    7572 ssh_runner.go:235] Completed: systemctl --version: (1.5528756s)
	I0531 19:50:59.517198    7572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:50:59.553190    7572 pause.go:50] kubelet running: true
	I0531 19:50:59.564187    7572 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0531 19:50:59.906097    7572 retry.go:31] will retry after 276.165072ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0531 19:51:00.204931    7572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:51:00.242673    7572 pause.go:50] kubelet running: true
	I0531 19:51:00.262093    7572 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0531 19:51:00.777154    7572 retry.go:31] will retry after 540.190908ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0531 19:51:01.343814    7572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:51:01.431105    7572 pause.go:50] kubelet running: true
	I0531 19:51:01.444121    7572 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0531 19:51:01.777112    7572 out.go:177] 
	W0531 19:51:01.779684    7572 out.go:239] X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	W0531 19:51:01.779684    7572 out.go:239] * 
	* 
	W0531 19:51:01.886073    7572 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube_pause_8a34b101973a5475dd3f2895f630b939c2202307_5.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube_pause_8a34b101973a5475dd3f2895f630b939c2202307_5.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 19:51:01.890158    7572 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:313: out/minikube-windows-amd64.exe pause -p embed-certs-20220531193346-2108 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220531193346-2108
helpers_test.go:231: (dbg) Done: docker inspect embed-certs-20220531193346-2108: (1.2278459s)
helpers_test.go:235: (dbg) docker inspect embed-certs-20220531193346-2108:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b3ef283712d9f0ede7e8292a0ec99b2d65b54ce8813a8abec9392bf47d24f60e",
	        "Created": "2022-05-31T19:40:27.5730813Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 243450,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T19:43:04.8203689Z",
	            "FinishedAt": "2022-05-31T19:42:40.7035645Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/b3ef283712d9f0ede7e8292a0ec99b2d65b54ce8813a8abec9392bf47d24f60e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b3ef283712d9f0ede7e8292a0ec99b2d65b54ce8813a8abec9392bf47d24f60e/hostname",
	        "HostsPath": "/var/lib/docker/containers/b3ef283712d9f0ede7e8292a0ec99b2d65b54ce8813a8abec9392bf47d24f60e/hosts",
	        "LogPath": "/var/lib/docker/containers/b3ef283712d9f0ede7e8292a0ec99b2d65b54ce8813a8abec9392bf47d24f60e/b3ef283712d9f0ede7e8292a0ec99b2d65b54ce8813a8abec9392bf47d24f60e-json.log",
	        "Name": "/embed-certs-20220531193346-2108",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220531193346-2108:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220531193346-2108",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c2e8ea02cea06b1bfa76cdee092b60de61a62f2fa5b1fefbc00c3722dae510ce-init/diff:/var/lib/docker/overlay2/42ebd8012a176a6c9bc83a2b81ffb1eb5c8e01d5410cb5d59346522bbaddf2cc/diff:/var/lib/docker/overlay2/59dce173ea661e9679f479af711a101ab0e97afb60abfd3c5b7a199b5c3e2b3b/diff:/var/lib/docker/overlay2/0328b60a223ca9f8bab93e6b86106d8b64d16fa559a56e88abbdee372b3b6a70/diff:/var/lib/docker/overlay2/b781f2620a052ee02138337819bde18c09122be2f20b7cfefaf7688f18d0c559/diff:/var/lib/docker/overlay2/af966c145b90b1748180b9ffcb1521d6fa9914e1d0ca582b239123591ffd1527/diff:/var/lib/docker/overlay2/5cd2b511f6f3bc93855ed77b5510ca4c67426eea433ccda53ea8e864342a413e/diff:/var/lib/docker/overlay2/f896d291d0c004470c3e38ea0d3be8e2b2a48ea36d45662c40fe3e105cbf4dec/diff:/var/lib/docker/overlay2/9e8994dcf5b1692245d5e40982d040298bfa7f7977892cf4be8ba3697f2c1283/diff:/var/lib/docker/overlay2/a7da4130c1b629e2a737b34701c6d4dfe6c48f92771856a887e06a1edc5456f8/diff:/var/lib/docker/overlay2/4c2573
4b9c8459489256b5f70dbb446897b9510d1cf9187e903f845ffa2a7ec2/diff:/var/lib/docker/overlay2/5c6cef49a0d0d1a36777fa7e0955ecdffb41ce354b7984f232e9cd51916416f7/diff:/var/lib/docker/overlay2/b79c799ed97edb702ed4c4ccb55ef9c645ae162e30e8f297ca5dd1152c29de41/diff:/var/lib/docker/overlay2/c84b7bc7c79ffdedf2d1265e21eec011dc3215811fb0569f7eb7d6b9aec884e8/diff:/var/lib/docker/overlay2/df8e2c3af362fd04ee17cb8d67105cf489427b2ae7cec77b79a2778e6c8c0234/diff:/var/lib/docker/overlay2/e56e356f8425868b31ada978267de73f074f211985ff1849ece7ab8341c33bae/diff:/var/lib/docker/overlay2/82c032066e83d3297742c83dd29132974e9db73a0b0b0a8edd3bcbbdb29cd53c/diff:/var/lib/docker/overlay2/15532131f3e6d0b2faf705733b06ae0c869147f2ca9592e3a80b6eaadad23544/diff:/var/lib/docker/overlay2/73fa456f504732f46cbe49368167247ca47b3099a6a75a7023ba16e7f598aee5/diff:/var/lib/docker/overlay2/e5635e020aadcc8dd1e5e3cd2eaa45cb97147f47bf406211fc61d7cbfc531193/diff:/var/lib/docker/overlay2/40b76b3249d3f7a8a737e2db80ebc1ed3b76d59724641217e8aae414ad832781/diff:/var/lib/d
ocker/overlay2/50ea2ce78d4fe52f626b2755a14f71a3c4f9b5a4f929646d9200876bdb1652c1/diff:/var/lib/docker/overlay2/d0a6e94d1f4aa73824d39c6e655bc4bdcd6568cea821b5d0f71174591c9cbbb3/diff:/var/lib/docker/overlay2/20c8fbe37a8c89a03b7bffe8cbc507e888cd5886f86f43b551d6a09fee1ce5e7/diff:/var/lib/docker/overlay2/48942b31cfe24e44c65a8be1785cd90488444f8c420a79b72a123034b01dd3f8/diff:/var/lib/docker/overlay2/c90124ab97e02facd949bfbd45815d6d73a40303b47ba4a4bc035788f5ee2dc3/diff:/var/lib/docker/overlay2/38c82aeabee1c8f46551413ecabb24f2f22680bb623f79e40c751558747a03f5/diff:/var/lib/docker/overlay2/4fa8894d1c1d773bc2e0511f273eab03fb7b8be7489eab5cd3eb57cc0d12e855/diff:/var/lib/docker/overlay2/23319fcddb47e50928e2044bac662de8153728f3a2eefa9c6ad5a5f413efec88/diff:/var/lib/docker/overlay2/b7ecd073b5b747c21ecbd1ca61887899f7e227fac3e383e24f868549b7929d74/diff:/var/lib/docker/overlay2/29a5674b4bbabfd07c4ce0b2a8b84ce98af380bf984043a4a9a6cd0743e4630c/diff:/var/lib/docker/overlay2/86a10266979ed72dc4372ade724e64741de35702626642ba60a15cca143
3682e/diff:/var/lib/docker/overlay2/03a1af7f82f1cb2b6eadbd1f13c8e9f6ca281ef3a8968d6aa45d284f286aefca/diff:/var/lib/docker/overlay2/f36cce4566278d24128326f8ef6ea446884c0c6941ccdb763ddf936e178afbff/diff:/var/lib/docker/overlay2/e54a2a61ba3597af53ec65a822821ffca97788e4b1dbfeedf98bf4d12e78973d/diff:/var/lib/docker/overlay2/dd54a25b898b0d7952f0bcb99a0450ee3d6b4269599e9355b4ae5e0c540c2caa/diff:/var/lib/docker/overlay2/ae6c1d1e9e79e03382217f21886420e3118a3f18f7c44f76c19262a84a43e219/diff:/var/lib/docker/overlay2/82faa00f86c1fa99063466464f71cdd6d510aa3e45c6c43301b2119b5bd5285a/diff:/var/lib/docker/overlay2/9f54999972b485642f042b9ed4d00316be0a1d35c060e619aca79b1583180446/diff:/var/lib/docker/overlay2/b467240c20564ba44d0946c716cf18ab5be973b43b02c37ee3ddd8f94502f41b/diff:/var/lib/docker/overlay2/21217d4ff1c5cf81dd53cfd831e0961189fb9f86812e1f53843f0022383345e6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c2e8ea02cea06b1bfa76cdee092b60de61a62f2fa5b1fefbc00c3722dae510ce/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c2e8ea02cea06b1bfa76cdee092b60de61a62f2fa5b1fefbc00c3722dae510ce/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c2e8ea02cea06b1bfa76cdee092b60de61a62f2fa5b1fefbc00c3722dae510ce/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220531193346-2108",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220531193346-2108/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220531193346-2108",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220531193346-2108",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220531193346-2108",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "20b23ab4993bd1a2ae6054def826e6e68e6055d6ab2db42139d3358a41f3f701",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54560"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54561"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54557"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54558"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54559"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/20b23ab4993b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220531193346-2108": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b3ef283712d9",
	                        "embed-certs-20220531193346-2108"
	                    ],
	                    "NetworkID": "fa417081acd115df61da89d4421c202d2b7d946ea3a40caf53be2b9b0c3bc79d",
	                    "EndpointID": "3f2687aa282b7b16fdfaa47b248e7c36c3d5605834959b55952f7e7bb048c2ee",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220531193346-2108 -n embed-certs-20220531193346-2108
E0531 19:51:10.036203    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220531193346-2108 -n embed-certs-20220531193346-2108: (7.3798668s)
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-20220531193346-2108 logs -n 25
E0531 19:51:18.268666    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220531192531-2108\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p embed-certs-20220531193346-2108 logs -n 25: (8.6254824s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |       User        |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|-------------------|----------------|---------------------|---------------------|
	| stop    | -p                                                         | embed-certs-20220531193346-2108                | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:42 GMT | 31 May 22 19:42 GMT |
	|         | embed-certs-20220531193346-2108                            |                                                |                   |                |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |                   |                |                     |                     |
	| addons  | enable dashboard -p                                        | embed-certs-20220531193346-2108                | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:42 GMT | 31 May 22 19:42 GMT |
	|         | embed-certs-20220531193346-2108                            |                                                |                   |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |                   |                |                     |                     |
	| start   | -p newest-cni-20220531193849-2108 --memory=2200            | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:41 GMT | 31 May 22 19:43 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |                   |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |                   |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |                   |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |                   |                |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.23.6               |                                                |                   |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:43 GMT | 31 May 22 19:43 GMT |
	|         | newest-cni-20220531193849-2108                             |                                                |                   |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |                   |                |                     |                     |
	| start   | -p                                                         | old-k8s-version-20220531192531-2108            | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:35 GMT | 31 May 22 19:43 GMT |
	|         | old-k8s-version-20220531192531-2108                        |                                                |                   |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                |                   |                |                     |                     |
	|         | --wait=true --kvm-network=default                          |                                                |                   |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                |                   |                |                     |                     |
	|         | --disable-driver-mounts                                    |                                                |                   |                |                     |                     |
	|         | --keep-context=false                                       |                                                |                   |                |                     |                     |
	|         | --driver=docker                                            |                                                |                   |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                                                |                   |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:43 GMT | 31 May 22 19:43 GMT |
	|         | newest-cni-20220531193849-2108                             |                                                |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |                   |                |                     |                     |
	| ssh     | -p                                                         | old-k8s-version-20220531192531-2108            | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:43 GMT | 31 May 22 19:43 GMT |
	|         | old-k8s-version-20220531192531-2108                        |                                                |                   |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |                   |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:43 GMT | 31 May 22 19:43 GMT |
	|         | newest-cni-20220531193849-2108                             |                                                |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |                   |                |                     |                     |
	| logs    | old-k8s-version-20220531192531-2108                        | old-k8s-version-20220531192531-2108            | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:44 GMT | 31 May 22 19:44 GMT |
	|         | logs -n 25                                                 |                                                |                   |                |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220531193451-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:37 GMT | 31 May 22 19:44 GMT |
	|         | default-k8s-different-port-20220531193451-2108             |                                                |                   |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |                   |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |                   |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |                   |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:44 GMT | 31 May 22 19:44 GMT |
	|         | newest-cni-20220531193849-2108                             |                                                |                   |                |                     |                     |
	| logs    | old-k8s-version-20220531192531-2108                        | old-k8s-version-20220531192531-2108            | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:44 GMT | 31 May 22 19:44 GMT |
	|         | logs -n 25                                                 |                                                |                   |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:44 GMT | 31 May 22 19:44 GMT |
	|         | newest-cni-20220531193849-2108                             |                                                |                   |                |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220531192531-2108            | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:45 GMT | 31 May 22 19:45 GMT |
	|         | old-k8s-version-20220531192531-2108                        |                                                |                   |                |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220531193451-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:45 GMT | 31 May 22 19:45 GMT |
	|         | default-k8s-different-port-20220531193451-2108             |                                                |                   |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |                   |                |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220531192531-2108            | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:45 GMT | 31 May 22 19:45 GMT |
	|         | old-k8s-version-20220531192531-2108                        |                                                |                   |                |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220531193451-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:45 GMT | 31 May 22 19:45 GMT |
	|         | default-k8s-different-port-20220531193451-2108             |                                                |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |                   |                |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220531193451-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:46 GMT | 31 May 22 19:46 GMT |
	|         | default-k8s-different-port-20220531193451-2108             |                                                |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |                   |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220531193451-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:46 GMT | 31 May 22 19:47 GMT |
	|         | default-k8s-different-port-20220531193451-2108             |                                                |                   |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220531193451-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:47 GMT | 31 May 22 19:47 GMT |
	|         | default-k8s-different-port-20220531193451-2108             |                                                |                   |                |                     |                     |
	| start   | -p kindnet-20220531191930-2108                             | kindnet-20220531191930-2108                    | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:45 GMT | 31 May 22 19:48 GMT |
	|         | --memory=2048                                              |                                                |                   |                |                     |                     |
	|         | --alsologtostderr                                          |                                                |                   |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |                                                |                   |                |                     |                     |
	|         | --cni=kindnet --driver=docker                              |                                                |                   |                |                     |                     |
	| ssh     | -p kindnet-20220531191930-2108                             | kindnet-20220531191930-2108                    | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:48 GMT | 31 May 22 19:48 GMT |
	|         | pgrep -a kubelet                                           |                                                |                   |                |                     |                     |
	| delete  | -p kindnet-20220531191930-2108                             | kindnet-20220531191930-2108                    | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:49 GMT | 31 May 22 19:49 GMT |
	| start   | -p                                                         | embed-certs-20220531193346-2108                | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:42 GMT | 31 May 22 19:49 GMT |
	|         | embed-certs-20220531193346-2108                            |                                                |                   |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                |                   |                |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                |                   |                |                     |                     |
	|         | --driver=docker                                            |                                                |                   |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |                   |                |                     |                     |
	| ssh     | -p                                                         | embed-certs-20220531193346-2108                | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:50 GMT | 31 May 22 19:50 GMT |
	|         | embed-certs-20220531193346-2108                            |                                                |                   |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |                   |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|-------------------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* I0531 19:49:50.346266    8616 api_server.go:71] duration metric: took 18.7567329s to wait for apiserver process to appear ...
	I0531 19:49:50.346266    8616 api_server.go:87] waiting for apiserver healthz status ...
	I0531 19:49:50.346266    8616 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54559/healthz ...
	I0531 19:49:50.439360    8616 api_server.go:266] https://127.0.0.1:54559/healthz returned 200:
	ok
	I0531 19:49:50.446782    8616 api_server.go:140] control plane version: v1.23.6
	I0531 19:49:50.446782    8616 api_server.go:130] duration metric: took 100.5156ms to wait for apiserver health ...
	I0531 19:49:50.446782    8616 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 19:49:50.557855    8616 system_pods.go:59] 9 kube-system pods found
	I0531 19:49:50.557855    8616 system_pods.go:61] "coredns-64897985d-5m9xf" [93fcde9d-8331-47a5-bb17-d9346196ab6f] Running
	I0531 19:49:50.557855    8616 system_pods.go:61] "coredns-64897985d-rx2dd" [e0ba19c1-80e0-4443-bf8c-f40c1d6ee893] Running
	I0531 19:49:50.557855    8616 system_pods.go:61] "etcd-embed-certs-20220531193346-2108" [ae630bdb-56d7-428d-ad63-ba21a1788353] Running
	I0531 19:49:50.557855    8616 system_pods.go:61] "kube-apiserver-embed-certs-20220531193346-2108" [e974f7b7-53fd-44cd-9c04-de97f963802a] Running
	I0531 19:49:50.557855    8616 system_pods.go:61] "kube-controller-manager-embed-certs-20220531193346-2108" [dae57bd0-d4e3-4bdc-8ced-4f046ccc3173] Running
	I0531 19:49:50.557855    8616 system_pods.go:61] "kube-proxy-jqpk2" [6183a47a-8d01-42ca-9726-4b2e540a42e8] Running
	I0531 19:49:50.557855    8616 system_pods.go:61] "kube-scheduler-embed-certs-20220531193346-2108" [de4aa727-145e-4ab2-a845-4b5cb54df891] Running
	I0531 19:49:50.557855    8616 system_pods.go:61] "metrics-server-b955d9d8-w48dh" [46b08093-d98e-43c9-9180-1bd5c1294a67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 19:49:50.557855    8616 system_pods.go:61] "storage-provisioner" [4b942c94-ac31-4c5a-8901-728ebf0506e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0531 19:49:50.557855    8616 system_pods.go:74] duration metric: took 111.0723ms to wait for pod list to return data ...
	I0531 19:49:50.557855    8616 default_sa.go:34] waiting for default service account to be created ...
	I0531 19:49:50.632635    8616 default_sa.go:45] found service account: "default"
	I0531 19:49:50.632635    8616 default_sa.go:55] duration metric: took 74.7795ms for default service account to be created ...
	I0531 19:49:50.632635    8616 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 19:49:50.740763    8616 system_pods.go:86] 9 kube-system pods found
	I0531 19:49:50.740763    8616 system_pods.go:89] "coredns-64897985d-5m9xf" [93fcde9d-8331-47a5-bb17-d9346196ab6f] Running
	I0531 19:49:50.741320    8616 system_pods.go:89] "coredns-64897985d-rx2dd" [e0ba19c1-80e0-4443-bf8c-f40c1d6ee893] Running
	I0531 19:49:50.741320    8616 system_pods.go:89] "etcd-embed-certs-20220531193346-2108" [ae630bdb-56d7-428d-ad63-ba21a1788353] Running
	I0531 19:49:50.741385    8616 system_pods.go:89] "kube-apiserver-embed-certs-20220531193346-2108" [e974f7b7-53fd-44cd-9c04-de97f963802a] Running
	I0531 19:49:50.741385    8616 system_pods.go:89] "kube-controller-manager-embed-certs-20220531193346-2108" [dae57bd0-d4e3-4bdc-8ced-4f046ccc3173] Running
	I0531 19:49:50.741385    8616 system_pods.go:89] "kube-proxy-jqpk2" [6183a47a-8d01-42ca-9726-4b2e540a42e8] Running
	I0531 19:49:50.741385    8616 system_pods.go:89] "kube-scheduler-embed-certs-20220531193346-2108" [de4aa727-145e-4ab2-a845-4b5cb54df891] Running
	I0531 19:49:50.741385    8616 system_pods.go:89] "metrics-server-b955d9d8-w48dh" [46b08093-d98e-43c9-9180-1bd5c1294a67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 19:49:50.741385    8616 system_pods.go:89] "storage-provisioner" [4b942c94-ac31-4c5a-8901-728ebf0506e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0531 19:49:50.741385    8616 system_pods.go:126] duration metric: took 108.7499ms to wait for k8s-apps to be running ...
	I0531 19:49:50.741385    8616 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 19:49:50.760406    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:49:50.860598    8616 system_svc.go:56] duration metric: took 119.2121ms WaitForService to wait for kubelet.
	I0531 19:49:50.860598    8616 kubeadm.go:572] duration metric: took 19.2710623s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0531 19:49:50.860598    8616 node_conditions.go:102] verifying NodePressure condition ...
	I0531 19:49:50.928192    8616 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0531 19:49:50.928192    8616 node_conditions.go:123] node cpu capacity is 16
	I0531 19:49:50.928192    8616 node_conditions.go:105] duration metric: took 67.5936ms to run NodePressure ...
	I0531 19:49:50.929192    8616 start.go:213] waiting for startup goroutines ...
	I0531 19:49:51.152403    8616 start.go:504] kubectl: 1.18.2, cluster: 1.23.6 (minor skew: 5)
	Log file created at: 2022/05/31 19:49:51
	Running on machine: minikube7
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 19:49:51.084809    9204 out.go:296] Setting OutFile to fd 1616 ...
	I0531 19:49:51.167390    9204 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:49:51.167390    9204 out.go:309] Setting ErrFile to fd 1840...
	I0531 19:49:51.167390    9204 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:49:51.186974    9204 out.go:303] Setting JSON to false
	I0531 19:49:51.190465    9204 start.go:115] hostinfo: {"hostname":"minikube7","uptime":84861,"bootTime":1653941730,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0531 19:49:51.190465    9204 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 19:49:51.254771    9204 out.go:177] * [calico-20220531191937-2108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0531 19:49:51.254771    8616 out.go:177] 
	I0531 19:49:51.260770    9204 notify.go:193] Checking for updates...
	W0531 19:49:51.260770    8616 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilites with Kubernetes 1.23.6.
	I0531 19:49:51.269758    9204 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0531 19:49:51.269758    8616 out.go:177]   - Want kubectl v1.23.6? Try 'minikube kubectl -- get pods -A'
	I0531 19:49:51.276754    9204 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0531 19:49:51.281753    8616 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220531193346-2108" cluster and "default" namespace by default
	I0531 19:49:51.285755    9204 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 19:49:51.296776    9204 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:49:50.645797    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:49:53.140920    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:49:52.888286    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:49:54.499135    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.6106726s)
	I0531 19:49:51.301760    9204 config.go:178] Loaded profile config "auto-20220531191922-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:49:51.301760    9204 config.go:178] Loaded profile config "cilium-20220531191937-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:49:51.302765    9204 config.go:178] Loaded profile config "embed-certs-20220531193346-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:49:51.302765    9204 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 19:49:54.992808    9204 docker.go:137] docker version: linux-20.10.14
	I0531 19:49:54.999822    9204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:49:57.486481    9204 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.4861987s)
	I0531 19:49:57.487080    9204 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:72 OomKillDisable:true NGoroutines:58 SystemTime:2022-05-31 19:49:56.2369805 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 19:49:57.604786    9204 out.go:177] * Using the docker driver based on user configuration
	I0531 19:49:55.143006    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:49:57.145340    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:49:57.516850    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:49:58.898389    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.3813979s)
	I0531 19:49:57.608317    9204 start.go:284] selected driver: docker
	I0531 19:49:57.608513    9204 start.go:806] validating driver "docker" against <nil>
	I0531 19:49:57.608513    9204 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:49:57.700957    9204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:50:00.270796    9204 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.5698275s)
	I0531 19:50:00.270796    9204 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:72 OomKillDisable:true NGoroutines:58 SystemTime:2022-05-31 19:49:59.025748 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 19:50:00.270796    9204 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0531 19:50:00.272492    9204 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 19:50:00.275572    9204 out.go:177] * Using Docker Desktop driver with the root privilege
	I0531 19:50:00.278649    9204 cni.go:95] Creating CNI manager for "calico"
	I0531 19:50:00.278649    9204 start_flags.go:301] Found "Calico" CNI - setting NetworkPlugin=cni
	I0531 19:50:00.278649    9204 start_flags.go:306] config:
	{Name:calico-20220531191937-2108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220531191937-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 19:50:00.283331    9204 out.go:177] * Starting control plane node calico-20220531191937-2108 in cluster calico-20220531191937-2108
	I0531 19:50:00.287897    9204 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 19:50:00.290886    9204 out.go:177] * Pulling base image ...
	I0531 19:50:00.295769    9204 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 19:50:00.295769    9204 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 19:50:00.295769    9204 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0531 19:50:00.295769    9204 cache.go:57] Caching tarball of preloaded images
	I0531 19:50:00.296331    9204 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 19:50:00.296577    9204 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0531 19:50:00.296780    9204 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\config.json ...
	I0531 19:50:00.296892    9204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\config.json: {Name:mk395a5aeceb2554c99cc9c4c3ac1d1fc9bee949 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:50:01.564541    9204 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 19:50:01.564541    9204 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 19:50:01.564541    9204 cache.go:206] Successfully downloaded all kic artifacts
	I0531 19:50:01.564541    9204 start.go:352] acquiring machines lock for calico-20220531191937-2108: {Name:mk229298a8341a90ce561add7d1a945d7b3315d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:50:01.564541    9204 start.go:356] acquired machines lock for "calico-20220531191937-2108" in 0s
	I0531 19:50:01.564541    9204 start.go:91] Provisioning new machine with config: &{Name:calico-20220531191937-2108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220531191937-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 19:50:01.564541    9204 start.go:131] createHost starting for "" (driver="docker")
	I0531 19:49:59.646777    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:02.332392    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:01.915706    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:03.216485    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.300773s)
	I0531 19:50:01.568587    9204 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0531 19:50:01.568587    9204 start.go:165] libmachine.API.Create for "calico-20220531191937-2108" (driver="docker")
	I0531 19:50:01.568587    9204 client.go:168] LocalClient.Create starting
	I0531 19:50:01.569562    9204 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0531 19:50:01.569562    9204 main.go:134] libmachine: Decoding PEM data...
	I0531 19:50:01.569562    9204 main.go:134] libmachine: Parsing certificate...
	I0531 19:50:01.569562    9204 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0531 19:50:01.569562    9204 main.go:134] libmachine: Decoding PEM data...
	I0531 19:50:01.569562    9204 main.go:134] libmachine: Parsing certificate...
	I0531 19:50:01.578550    9204 cli_runner.go:164] Run: docker network inspect calico-20220531191937-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 19:50:02.878167    9204 cli_runner.go:211] docker network inspect calico-20220531191937-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 19:50:02.878167    9204 cli_runner.go:217] Completed: docker network inspect calico-20220531191937-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2996109s)
	I0531 19:50:02.885168    9204 network_create.go:272] running [docker network inspect calico-20220531191937-2108] to gather additional debugging logs...
	I0531 19:50:02.885168    9204 cli_runner.go:164] Run: docker network inspect calico-20220531191937-2108
	W0531 19:50:04.147653    9204 cli_runner.go:211] docker network inspect calico-20220531191937-2108 returned with exit code 1
	I0531 19:50:04.147653    9204 cli_runner.go:217] Completed: docker network inspect calico-20220531191937-2108: (1.2624794s)
	I0531 19:50:04.147653    9204 network_create.go:275] error running [docker network inspect calico-20220531191937-2108]: docker network inspect calico-20220531191937-2108: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220531191937-2108
	I0531 19:50:04.147653    9204 network_create.go:277] output of [docker network inspect calico-20220531191937-2108]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220531191937-2108
	
	** /stderr **
	I0531 19:50:04.157637    9204 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 19:50:05.415515    9204 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2578719s)
	I0531 19:50:05.445814    9204 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000006d88] misses:0}
	I0531 19:50:05.445814    9204 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 19:50:05.445814    9204 network_create.go:115] attempt to create docker network calico-20220531191937-2108 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0531 19:50:05.453842    9204 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220531191937-2108
	I0531 19:50:04.642744    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:06.644986    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:09.144450    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:06.251698    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:07.600632    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.3488025s)
	I0531 19:50:06.783335    9204 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220531191937-2108: (1.3294874s)
	I0531 19:50:06.783335    9204 network_create.go:99] docker network calico-20220531191937-2108 192.168.49.0/24 created
	I0531 19:50:06.783335    9204 kic.go:106] calculated static IP "192.168.49.2" for the "calico-20220531191937-2108" container
	I0531 19:50:06.796335    9204 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 19:50:08.150320    9204 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.3528697s)
	I0531 19:50:08.159915    9204 cli_runner.go:164] Run: docker volume create calico-20220531191937-2108 --label name.minikube.sigs.k8s.io=calico-20220531191937-2108 --label created_by.minikube.sigs.k8s.io=true
	I0531 19:50:09.384153    9204 cli_runner.go:217] Completed: docker volume create calico-20220531191937-2108 --label name.minikube.sigs.k8s.io=calico-20220531191937-2108 --label created_by.minikube.sigs.k8s.io=true: (1.2242333s)
	I0531 19:50:09.384392    9204 oci.go:103] Successfully created a docker volume calico-20220531191937-2108
	I0531 19:50:09.393540    9204 cli_runner.go:164] Run: docker run --rm --name calico-20220531191937-2108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220531191937-2108 --entrypoint /usr/bin/test -v calico-20220531191937-2108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib
	I0531 19:50:11.642196    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:13.647209    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:10.615103    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:12.008187    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.3920811s)
	I0531 19:50:15.042820    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:12.953319    9204 cli_runner.go:217] Completed: docker run --rm --name calico-20220531191937-2108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220531191937-2108 --entrypoint /usr/bin/test -v calico-20220531191937-2108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib: (3.5597634s)
	I0531 19:50:12.953319    9204 oci.go:107] Successfully prepared a docker volume calico-20220531191937-2108
	I0531 19:50:12.953319    9204 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 19:50:12.953319    9204 kic.go:179] Starting extracting preloaded images to volume ...
	I0531 19:50:12.965313    9204 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220531191937-2108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 19:50:15.654692    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:18.086828    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:16.345289    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.3024634s)
	I0531 19:50:19.377999    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:20.146417    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:22.690233    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:20.665948    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.2879427s)
	I0531 19:50:23.696116    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:24.969191    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.2730695s)
	I0531 19:50:25.137333    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:27.203556    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:27.988449    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:29.200894    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.2124396s)
	I0531 19:50:29.657641    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:31.933017    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:32.227746    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:33.510930    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.2831022s)
	I0531 19:50:34.332888    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:36.646199    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:38.772665    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:36.545016    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:37.748014    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.202913s)
	I0531 19:50:40.202571    9204 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220531191937-2108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir: (27.2371385s)
	I0531 19:50:40.202571    9204 kic.go:188] duration metric: took 27.249133 seconds to extract preloaded images to volume
	I0531 19:50:40.212968    9204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:50:41.079065    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:43.087683    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:40.770103    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:41.922738    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.1526298s)
	I0531 19:50:44.926251    9164 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0531 19:50:44.926251    9164 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
	I0531 19:50:44.942411    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:42.425561    9204 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.212583s)
	I0531 19:50:42.426310    9204 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:74 OomKillDisable:true NGoroutines:58 SystemTime:2022-05-31 19:50:41.2734932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 19:50:42.437858    9204 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 19:50:44.621798    9204 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.1839299s)
	I0531 19:50:44.628791    9204 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220531191937-2108 --name calico-20220531191937-2108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220531191937-2108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220531191937-2108 --network calico-20220531191937-2108 --ip 192.168.49.2 --volume calico-20220531191937-2108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418
	I0531 19:50:45.089192    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:47.102726    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:46.171577    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.2291605s)
	W0531 19:50:46.171577    9164 delete.go:135] deletehost failed: Docker machine "auto-20220531191922-2108" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0531 19:50:46.180573    9164 cli_runner.go:164] Run: docker container inspect -f {{.Id}} auto-20220531191922-2108
	I0531 19:50:47.463241    9164 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} auto-20220531191922-2108: (1.2826628s)
	I0531 19:50:47.474237    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:48.820556    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.3463122s)
	I0531 19:50:48.827551    9164 cli_runner.go:164] Run: docker exec --privileged -t auto-20220531191922-2108 /bin/bash -c "sudo init 0"
	W0531 19:50:50.067176    9164 cli_runner.go:211] docker exec --privileged -t auto-20220531191922-2108 /bin/bash -c "sudo init 0" returned with exit code 1
	I0531 19:50:50.067176    9164 cli_runner.go:217] Completed: docker exec --privileged -t auto-20220531191922-2108 /bin/bash -c "sudo init 0": (1.2396197s)
	I0531 19:50:50.067176    9164 oci.go:625] error shutdown auto-20220531191922-2108: docker exec --privileged -t auto-20220531191922-2108 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 340ffde1adac470b31984d86e47615244c565b791afa400115f002bf5bf8dd67 is not running
	I0531 19:50:47.120642    9204 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220531191937-2108 --name calico-20220531191937-2108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220531191937-2108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220531191937-2108 --network calico-20220531191937-2108 --ip 192.168.49.2 --volume calico-20220531191937-2108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418: (2.4917556s)
	I0531 19:50:47.129556    9204 cli_runner.go:164] Run: docker container inspect calico-20220531191937-2108 --format={{.State.Running}}
	I0531 19:50:48.475805    9204 cli_runner.go:217] Completed: docker container inspect calico-20220531191937-2108 --format={{.State.Running}}: (1.3462433s)
	I0531 19:50:48.486615    9204 cli_runner.go:164] Run: docker container inspect calico-20220531191937-2108 --format={{.State.Status}}
	I0531 19:50:49.783563    9204 cli_runner.go:217] Completed: docker container inspect calico-20220531191937-2108 --format={{.State.Status}}: (1.2968512s)
	I0531 19:50:49.795640    9204 cli_runner.go:164] Run: docker exec calico-20220531191937-2108 stat /var/lib/dpkg/alternatives/iptables
	I0531 19:50:49.592858    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:52.094119    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:51.090703    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:52.331743    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.2408341s)
	I0531 19:50:52.331920    9164 oci.go:639] temporary error: container auto-20220531191922-2108 status is  but expect it to be exited
	I0531 19:50:52.332002    9164 oci.go:645] Successfully shutdown container auto-20220531191922-2108
	I0531 19:50:52.340140    9164 cli_runner.go:164] Run: docker rm -f -v auto-20220531191922-2108
	I0531 19:50:53.611440    9164 cli_runner.go:217] Completed: docker rm -f -v auto-20220531191922-2108: (1.2712945s)
	I0531 19:50:53.629598    9164 cli_runner.go:164] Run: docker container inspect -f {{.Id}} auto-20220531191922-2108
	W0531 19:50:54.793441    9164 cli_runner.go:211] docker container inspect -f {{.Id}} auto-20220531191922-2108 returned with exit code 1
	I0531 19:50:54.793672    9164 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} auto-20220531191922-2108: (1.1638383s)
	I0531 19:50:54.808001    9164 cli_runner.go:164] Run: docker network inspect auto-20220531191922-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 19:50:51.194840    9204 cli_runner.go:217] Completed: docker exec calico-20220531191937-2108 stat /var/lib/dpkg/alternatives/iptables: (1.3991943s)
	I0531 19:50:51.194840    9204 oci.go:247] the created container "calico-20220531191937-2108" has a running status.
	I0531 19:50:51.194840    9204 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220531191937-2108\id_rsa...
	I0531 19:50:51.352748    9204 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220531191937-2108\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0531 19:50:52.714374    9204 cli_runner.go:164] Run: docker container inspect calico-20220531191937-2108 --format={{.State.Status}}
	I0531 19:50:53.983691    9204 cli_runner.go:217] Completed: docker container inspect calico-20220531191937-2108 --format={{.State.Status}}: (1.2693109s)
	I0531 19:50:53.999679    9204 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0531 19:50:53.999679    9204 kic_runner.go:114] Args: [docker exec --privileged calico-20220531191937-2108 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0531 19:50:55.344133    9204 kic_runner.go:123] Done: [docker exec --privileged calico-20220531191937-2108 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.3442906s)
	I0531 19:50:55.347971    9204 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220531191937-2108\id_rsa...
	I0531 19:50:55.925795    9204 cli_runner.go:164] Run: docker container inspect calico-20220531191937-2108 --format={{.State.Status}}
	W0531 19:50:56.033847    9164 cli_runner.go:211] docker network inspect auto-20220531191922-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 19:50:56.033847    9164 cli_runner.go:217] Completed: docker network inspect auto-20220531191922-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2258406s)
	I0531 19:50:56.040853    9164 network_create.go:272] running [docker network inspect auto-20220531191922-2108] to gather additional debugging logs...
	I0531 19:50:56.040853    9164 cli_runner.go:164] Run: docker network inspect auto-20220531191922-2108
	W0531 19:50:57.206246    9164 cli_runner.go:211] docker network inspect auto-20220531191922-2108 returned with exit code 1
	I0531 19:50:57.206246    9164 cli_runner.go:217] Completed: docker network inspect auto-20220531191922-2108: (1.1653883s)
	I0531 19:50:57.206246    9164 network_create.go:275] error running [docker network inspect auto-20220531191922-2108]: docker network inspect auto-20220531191922-2108: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220531191922-2108
	I0531 19:50:57.206246    9164 network_create.go:277] output of [docker network inspect auto-20220531191922-2108]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220531191922-2108
	
	** /stderr **
	W0531 19:50:57.207252    9164 delete.go:139] delete failed (probably ok) <nil>
	I0531 19:50:57.207252    9164 fix.go:115] Sleeping 1 second for extra luck!
	I0531 19:50:58.209593    9164 start.go:131] createHost starting for "" (driver="docker")
	I0531 19:50:54.590673    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:56.597659    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:59.090202    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:58.216019    9164 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0531 19:50:58.216363    9164 start.go:165] libmachine.API.Create for "auto-20220531191922-2108" (driver="docker")
	I0531 19:50:58.216424    9164 client.go:168] LocalClient.Create starting
	I0531 19:50:58.216424    9164 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0531 19:50:58.217206    9164 main.go:134] libmachine: Decoding PEM data...
	I0531 19:50:58.217206    9164 main.go:134] libmachine: Parsing certificate...
	I0531 19:50:58.217598    9164 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0531 19:50:58.217933    9164 main.go:134] libmachine: Decoding PEM data...
	I0531 19:50:58.217933    9164 main.go:134] libmachine: Parsing certificate...
	I0531 19:50:58.233355    9164 cli_runner.go:164] Run: docker network inspect auto-20220531191922-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 19:50:59.565184    9164 cli_runner.go:211] docker network inspect auto-20220531191922-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 19:50:59.565184    9164 cli_runner.go:217] Completed: docker network inspect auto-20220531191922-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.3309144s)
	I0531 19:50:59.574188    9164 network_create.go:272] running [docker network inspect auto-20220531191922-2108] to gather additional debugging logs...
	I0531 19:50:59.574188    9164 cli_runner.go:164] Run: docker network inspect auto-20220531191922-2108
	I0531 19:50:57.128250    9204 cli_runner.go:217] Completed: docker container inspect calico-20220531191937-2108 --format={{.State.Status}}: (1.2024495s)
	I0531 19:50:57.128250    9204 machine.go:88] provisioning docker machine ...
	I0531 19:50:57.128250    9204 ubuntu.go:169] provisioning hostname "calico-20220531191937-2108"
	I0531 19:50:57.136258    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:50:58.384809    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.2485456s)
	I0531 19:50:58.389815    9204 main.go:134] libmachine: Using SSH client type: native
	I0531 19:50:58.396816    9204 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54919 <nil> <nil>}
	I0531 19:50:58.396816    9204 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-20220531191937-2108 && echo "calico-20220531191937-2108" | sudo tee /etc/hostname
	I0531 19:50:58.637577    9204 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-20220531191937-2108
	
	I0531 19:50:58.647947    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:50:59.961886    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.3139332s)
	I0531 19:50:59.966482    9204 main.go:134] libmachine: Using SSH client type: native
	I0531 19:50:59.967181    9204 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54919 <nil> <nil>}
	I0531 19:50:59.967181    9204 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220531191937-2108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220531191937-2108/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220531191937-2108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 19:51:00.166174    9204 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0531 19:51:00.166174    9204 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0531 19:51:00.166267    9204 ubuntu.go:177] setting up certificates
	I0531 19:51:00.166267    9204 provision.go:83] configureAuth start
	I0531 19:51:00.174598    9204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220531191937-2108
	I0531 19:51:01.585861    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:03.594656    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	W0531 19:51:00.789363    9164 cli_runner.go:211] docker network inspect auto-20220531191922-2108 returned with exit code 1
	I0531 19:51:00.789363    9164 cli_runner.go:217] Completed: docker network inspect auto-20220531191922-2108: (1.2150736s)
	I0531 19:51:00.789465    9164 network_create.go:275] error running [docker network inspect auto-20220531191922-2108]: docker network inspect auto-20220531191922-2108: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220531191922-2108
	I0531 19:51:00.789465    9164 network_create.go:277] output of [docker network inspect auto-20220531191922-2108]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220531191922-2108
	
	** /stderr **
	I0531 19:51:00.799089    9164 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 19:51:02.037678    9164 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2385829s)
	I0531 19:51:02.058285    9164 network.go:284] reusing subnet 192.168.49.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005f2b80] amended:false}} dirty:map[] misses:0}
	I0531 19:51:02.058285    9164 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 19:51:02.058285    9164 network_create.go:115] attempt to create docker network auto-20220531191922-2108 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0531 19:51:02.069439    9164 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220531191922-2108
	W0531 19:51:03.250157    9164 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220531191922-2108 returned with exit code 1
	I0531 19:51:03.250157    9164 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220531191922-2108: (1.1806423s)
	W0531 19:51:03.250157    9164 network_create.go:107] failed to create docker network auto-20220531191922-2108 192.168.49.0/24, will retry: subnet is taken
	I0531 19:51:03.275153    9164 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005f2b80] amended:false}} dirty:map[] misses:0}
	I0531 19:51:03.275153    9164 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 19:51:03.299141    9164 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005f2b80] amended:true}} dirty:map[192.168.49.0:0xc0005f2b80 192.168.58.0:0xc000006c98] misses:0}
	I0531 19:51:03.299141    9164 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 19:51:03.299141    9164 network_create.go:115] attempt to create docker network auto-20220531191922-2108 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0531 19:51:03.307143    9164 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220531191922-2108
	W0531 19:51:04.516619    9164 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220531191922-2108 returned with exit code 1
	I0531 19:51:04.516619    9164 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220531191922-2108: (1.2094701s)
	W0531 19:51:04.516619    9164 network_create.go:107] failed to create docker network auto-20220531191922-2108 192.168.58.0/24, will retry: subnet is taken
	I0531 19:51:04.541855    9164 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005f2b80] amended:true}} dirty:map[192.168.49.0:0xc0005f2b80 192.168.58.0:0xc000006c98] misses:1}
	I0531 19:51:04.541855    9164 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 19:51:04.560170    9164 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005f2b80] amended:true}} dirty:map[192.168.49.0:0xc0005f2b80 192.168.58.0:0xc000006c98 192.168.67.0:0xc0005922f8] misses:1}
	I0531 19:51:04.560170    9164 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 19:51:04.560170    9164 network_create.go:115] attempt to create docker network auto-20220531191922-2108 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0531 19:51:04.568815    9164 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220531191922-2108
	I0531 19:51:01.477955    9204 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220531191937-2108: (1.303351s)
	I0531 19:51:01.477955    9204 provision.go:138] copyHostCerts
	I0531 19:51:01.477955    9204 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0531 19:51:01.477955    9204 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0531 19:51:01.478916    9204 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0531 19:51:01.479924    9204 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0531 19:51:01.479924    9204 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0531 19:51:01.479924    9204 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0531 19:51:01.481945    9204 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0531 19:51:01.481945    9204 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0531 19:51:01.481945    9204 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0531 19:51:01.482907    9204 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.calico-20220531191937-2108 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220531191937-2108]
	I0531 19:51:01.638392    9204 provision.go:172] copyRemoteCerts
	I0531 19:51:01.648401    9204 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 19:51:01.656385    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:51:02.904839    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.2483134s)
	I0531 19:51:02.904949    9204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54919 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220531191937-2108\id_rsa Username:docker}
	I0531 19:51:03.060111    9204 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.4117041s)
	I0531 19:51:03.060887    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 19:51:03.138138    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0531 19:51:03.189140    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0531 19:51:03.247164    9204 provision.go:86] duration metric: configureAuth took 3.0808835s
	I0531 19:51:03.247164    9204 ubuntu.go:193] setting minikube options for container-runtime
	I0531 19:51:03.247164    9204 config.go:178] Loaded profile config "calico-20220531191937-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:51:03.261180    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:51:04.500656    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.239471s)
	I0531 19:51:04.504628    9204 main.go:134] libmachine: Using SSH client type: native
	I0531 19:51:04.504628    9204 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54919 <nil> <nil>}
	I0531 19:51:04.504628    9204 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0531 19:51:04.713593    9204 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0531 19:51:04.713593    9204 ubuntu.go:71] root file system type: overlay
	I0531 19:51:04.714591    9204 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0531 19:51:04.721589    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:51:05.902536    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.1809424s)
	I0531 19:51:05.906537    9204 main.go:134] libmachine: Using SSH client type: native
	I0531 19:51:05.907557    9204 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54919 <nil> <nil>}
	I0531 19:51:05.907557    9204 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0531 19:51:06.138734    9204 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0531 19:51:06.148744    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:51:06.073904    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:08.101133    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	W0531 19:51:05.734225    9164 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220531191922-2108 returned with exit code 1
	I0531 19:51:05.734225    9164 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220531191922-2108: (1.1654052s)
	W0531 19:51:05.734225    9164 network_create.go:107] failed to create docker network auto-20220531191922-2108 192.168.67.0/24, will retry: subnet is taken
	I0531 19:51:05.751381    9164 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005f2b80] amended:true}} dirty:map[192.168.49.0:0xc0005f2b80 192.168.58.0:0xc000006c98 192.168.67.0:0xc0005922f8] misses:2}
	I0531 19:51:05.751381    9164 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 19:51:05.770483    9164 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005f2b80] amended:true}} dirty:map[192.168.49.0:0xc0005f2b80 192.168.58.0:0xc000006c98 192.168.67.0:0xc0005922f8 192.168.76.0:0xc000006dc0] misses:2}
	I0531 19:51:05.770483    9164 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 19:51:05.770483    9164 network_create.go:115] attempt to create docker network auto-20220531191922-2108 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0531 19:51:05.777060    9164 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220531191922-2108
	I0531 19:51:07.131208    9164 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220531191922-2108: (1.3541418s)
	I0531 19:51:07.131208    9164 network_create.go:99] docker network auto-20220531191922-2108 192.168.76.0/24 created
	I0531 19:51:07.131208    9164 kic.go:106] calculated static IP "192.168.76.2" for the "auto-20220531191922-2108" container
	I0531 19:51:07.147458    9164 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 19:51:08.442899    9164 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.2952755s)
	I0531 19:51:08.455737    9164 cli_runner.go:164] Run: docker volume create auto-20220531191922-2108 --label name.minikube.sigs.k8s.io=auto-20220531191922-2108 --label created_by.minikube.sigs.k8s.io=true
	I0531 19:51:09.639133    9164 cli_runner.go:217] Completed: docker volume create auto-20220531191922-2108 --label name.minikube.sigs.k8s.io=auto-20220531191922-2108 --label created_by.minikube.sigs.k8s.io=true: (1.1833905s)
	I0531 19:51:09.639133    9164 oci.go:103] Successfully created a docker volume auto-20220531191922-2108
	I0531 19:51:09.649152    9164 cli_runner.go:164] Run: docker run --rm --name auto-20220531191922-2108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220531191922-2108 --entrypoint /usr/bin/test -v auto-20220531191922-2108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-05-31 19:43:05 UTC, end at Tue 2022-05-31 19:51:17 UTC. --
	May 31 19:48:49 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:48:49.583747200Z" level=info msg="ignoring event" container=e08080094d8664d9aa91b8deae02a96b9b1bc982f6fcf644237d212578613113 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:48:50 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:48:50.186351000Z" level=info msg="ignoring event" container=657a49562f497935ed8cfb5450b794fcef0fb529bae8516198153c43d1a06794 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:48:50 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:48:50.671769800Z" level=info msg="ignoring event" container=52eebb6efca24e2f29a4c15d5ebc725494e06fb2fdec51b3c69f6b4bbd56e893 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:48:51 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:48:51.145577200Z" level=info msg="ignoring event" container=321b341c3da01f03172c5f814e5e3956dff334962fe69b1419d552a7088ec25e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:48:51 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:48:51.560593600Z" level=info msg="ignoring event" container=42131017a14c7aa8c301405260289deb36491abb326dcf4664a7643128255cb4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:48:52 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:48:52.020321600Z" level=info msg="ignoring event" container=e31760c5c1f5fffc253c944bc1585f47574225e13c0f6895e332f25ea8794a05 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:48:52 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:48:52.472713000Z" level=info msg="ignoring event" container=b0ce8ecd8c5e3109c0394845dbb8dbded24845410f32ebeff68a660438a5fc58 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:49:48 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:49:48.947001500Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:49:48 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:49:48.948075900Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:49:49 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:49:49.132672800Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:49:50 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:49:50.943284000Z" level=info msg="ignoring event" container=7e4df02eea3abb4773abbd8693ff77f41cabb5856ea82c9ed22215112b159445 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:49:51 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:49:51.296521800Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	May 31 19:49:51 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:49:51.469553400Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	May 31 19:49:51 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:49:51.487563700Z" level=info msg="ignoring event" container=6c33a6cd399b34f3186ba32a1a024490c70b9d36496e04dd6596dc6b8e88a6ea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:50:10 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:50:10.100102000Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	May 31 19:50:10 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:50:10.851162100Z" level=info msg="ignoring event" container=3612c77016ee135003d1e5f7eb30b67ef73402742c72878b54cc53a10fc1265c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:50:13 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:50:13.633607300Z" level=info msg="ignoring event" container=fb4f7abe8f01a9515de8c3f9a0f0f5847dc7e16251463a2d2f3b43bb590f1ae5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:50:33 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:50:33.190591600Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:50:33 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:50:33.190764700Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:50:33 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:50:33.391311500Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:50:33 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:50:33.444839500Z" level=info msg="ignoring event" container=4e329316b6cf16e812f4fc5db1fe998ca74bc770f881aa43a99f5fdbe785d9b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:51:01 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:51:01.252849500Z" level=info msg="ignoring event" container=b737170ecd04f67be756f5b2dcd03440df6107b41a300d43cc2fa92e16bc4acb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:51:03 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:51:03.446542400Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:51:03 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:51:03.446731300Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:51:03 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:51:03.455770700Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	b737170ecd04f       a90209bb39e3d                                                                                    17 seconds ago       Exited              dashboard-metrics-scraper   3                   573ade20600ce
	d951ec257dd10       kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2   44 seconds ago       Running             kubernetes-dashboard        0                   2bafb3ffe18c9
	9a7ffad2971b8       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   da0d57e2765f6
	e92d705833d42       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   93fb15ca13d74
	50d3e27751736       4c03754524064                                                                                    About a minute ago   Running             kube-proxy                  0                   6a8f003c94a78
	482a2f211adf5       595f327f224a4                                                                                    2 minutes ago        Running             kube-scheduler              2                   0bbf18fbf4fa4
	0b2164659b745       25f8c7f3da61c                                                                                    2 minutes ago        Running             etcd                        2                   efe187e67f2e7
	e45e0056f239d       8fa62c12256df                                                                                    2 minutes ago        Running             kube-apiserver              2                   89c6f37c504cf
	0607075503f48       df7b72818ad2e                                                                                    2 minutes ago        Running             kube-controller-manager     2                   97f35fc644fe5
	
	* 
	* ==> coredns [e92d705833d4] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220531193346-2108
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220531193346-2108
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=embed-certs-20220531193346-2108
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T19_49_17_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 19:49:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220531193346-2108
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 19:51:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 19:50:51 +0000   Tue, 31 May 2022 19:49:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 19:50:51 +0000   Tue, 31 May 2022 19:49:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 19:50:51 +0000   Tue, 31 May 2022 19:49:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 May 2022 19:50:51 +0000   Tue, 31 May 2022 19:49:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    embed-certs-20220531193346-2108
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                bfc82849fe6e4a6a9236307a23a8b5f1
	  Boot ID:                    99d8680c-6839-4c5e-a5fa-8740ef80d5ef
	  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-5m9xf                                    100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-embed-certs-20220531193346-2108                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kube-apiserver-embed-certs-20220531193346-2108             250m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-embed-certs-20220531193346-2108    200m (1%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-jqpk2                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-embed-certs-20220531193346-2108             100m (0%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 metrics-server-b955d9d8-w48dh                              100m (0%)     0 (0%)      200Mi (0%)       0 (0%)         94s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-g6xnz                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-9rhsq                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (5%)   0 (0%)
	  memory             370Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 91s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  2m18s (x6 over 2m19s)  kubelet     Node embed-certs-20220531193346-2108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m18s (x5 over 2m19s)  kubelet     Node embed-certs-20220531193346-2108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m18s (x5 over 2m19s)  kubelet     Node embed-certs-20220531193346-2108 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m1s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m                     kubelet     Node embed-certs-20220531193346-2108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                     kubelet     Node embed-certs-20220531193346-2108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m                     kubelet     Node embed-certs-20220531193346-2108 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m                     kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                110s                   kubelet     Node embed-certs-20220531193346-2108 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000006] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.089750] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.002712] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.106424] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.091580] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000006] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[May31 19:22] WSL2: Performing memory compaction.
	[May31 19:23] WSL2: Performing memory compaction.
	[May31 19:24] WSL2: Performing memory compaction.
	[May31 19:25] WSL2: Performing memory compaction.
	[May31 19:26] WSL2: Performing memory compaction.
	[May31 19:27] WSL2: Performing memory compaction.
	[May31 19:28] WSL2: Performing memory compaction.
	[May31 19:30] WSL2: Performing memory compaction.
	[May31 19:32] WSL2: Performing memory compaction.
	[May31 19:34] WSL2: Performing memory compaction.
	[May31 19:37] WSL2: Performing memory compaction.
	[May31 19:39] WSL2: Performing memory compaction.
	[May31 19:40] WSL2: Performing memory compaction.
	[May31 19:45] WSL2: Performing memory compaction.
	[May31 19:46] WSL2: Performing memory compaction.
	[May31 19:48] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [0b2164659b74] <==
	* {"level":"info","ts":"2022-05-31T19:50:29.646Z","caller":"traceutil/trace.go:171","msg":"trace[1785229245] linearizableReadLoop","detail":"{readStateIndex:682; appliedIndex:682; }","duration":"123.7467ms","start":"2022-05-31T19:50:29.522Z","end":"2022-05-31T19:50:29.646Z","steps":["trace[1785229245] 'read index received'  (duration: 123.7301ms)","trace[1785229245] 'applied index is now lower than readState.Index'  (duration: 9.6µs)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T19:50:29.646Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"124.1214ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2022-05-31T19:50:29.647Z","caller":"traceutil/trace.go:171","msg":"trace[424543378] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:656; }","duration":"124.2132ms","start":"2022-05-31T19:50:29.522Z","end":"2022-05-31T19:50:29.647Z","steps":["trace[424543378] 'agreement among raft nodes before linearized reading'  (duration: 124.0675ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-31T19:50:29.647Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"122.8215ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2022-05-31T19:50:29.647Z","caller":"traceutil/trace.go:171","msg":"trace[1092524793] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:656; }","duration":"123.0841ms","start":"2022-05-31T19:50:29.524Z","end":"2022-05-31T19:50:29.647Z","steps":["trace[1092524793] 'agreement among raft nodes before linearized reading'  (duration: 122.7866ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-31T19:50:29.647Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"122.9551ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:2 size:8146"}
	{"level":"info","ts":"2022-05-31T19:50:29.647Z","caller":"traceutil/trace.go:171","msg":"trace[990750400] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:2; response_revision:656; }","duration":"123.4794ms","start":"2022-05-31T19:50:29.524Z","end":"2022-05-31T19:50:29.647Z","steps":["trace[990750400] 'agreement among raft nodes before linearized reading'  (duration: 122.8849ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-31T19:50:30.421Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"378.2623ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-05-31T19:50:30.422Z","caller":"traceutil/trace.go:171","msg":"trace[70207455] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:656; }","duration":"378.4829ms","start":"2022-05-31T19:50:30.043Z","end":"2022-05-31T19:50:30.422Z","steps":["trace[70207455] 'agreement among raft nodes before linearized reading'  (duration: 77.9106ms)","trace[70207455] 'range keys from in-memory index tree'  (duration: 300.3112ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T19:50:30.422Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-31T19:50:30.043Z","time spent":"378.5778ms","remote":"127.0.0.1:46090","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2022-05-31T19:50:31.722Z","caller":"traceutil/trace.go:171","msg":"trace[932692058] linearizableReadLoop","detail":"{readStateIndex:685; appliedIndex:685; }","duration":"355.5231ms","start":"2022-05-31T19:50:31.366Z","end":"2022-05-31T19:50:31.722Z","steps":["trace[932692058] 'read index received'  (duration: 355.5072ms)","trace[932692058] 'applied index is now lower than readState.Index'  (duration: 11.7µs)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T19:50:31.748Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"223.4747ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:2 size:8146"}
	{"level":"warn","ts":"2022-05-31T19:50:31.748Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"381.771ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-g6xnz.16f44870bec40978\" ","response":"range_response_count:1 size:840"}
	{"level":"info","ts":"2022-05-31T19:50:31.748Z","caller":"traceutil/trace.go:171","msg":"trace[1851800845] range","detail":"{range_begin:/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-g6xnz.16f44870bec40978; range_end:; response_count:1; response_revision:658; }","duration":"382.1708ms","start":"2022-05-31T19:50:31.366Z","end":"2022-05-31T19:50:31.748Z","steps":["trace[1851800845] 'agreement among raft nodes before linearized reading'  (duration: 356.0326ms)","trace[1851800845] 'range keys from in-memory index tree'  (duration: 25.6695ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T19:50:31.749Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"217.0188ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-05-31T19:50:31.749Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-31T19:50:31.366Z","time spent":"382.5099ms","remote":"127.0.0.1:46046","response type":"/etcdserverpb.KV/Range","request count":0,"request size":99,"response count":1,"response size":864,"request content":"key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-g6xnz.16f44870bec40978\" "}
	{"level":"info","ts":"2022-05-31T19:50:31.749Z","caller":"traceutil/trace.go:171","msg":"trace[574163081] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:658; }","duration":"217.5904ms","start":"2022-05-31T19:50:31.531Z","end":"2022-05-31T19:50:31.749Z","steps":["trace[574163081] 'agreement among raft nodes before linearized reading'  (duration: 190.8609ms)","trace[574163081] 'range keys from in-memory index tree'  (duration: 26.1262ms)"],"step_count":2}
	{"level":"info","ts":"2022-05-31T19:50:31.749Z","caller":"traceutil/trace.go:171","msg":"trace[341450766] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:2; response_revision:658; }","duration":"223.8695ms","start":"2022-05-31T19:50:31.524Z","end":"2022-05-31T19:50:31.748Z","steps":["trace[341450766] 'agreement among raft nodes before linearized reading'  (duration: 197.9624ms)","trace[341450766] 'range keys from in-memory index tree'  (duration: 25.4569ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T19:50:33.395Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"152.2137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1133"}
	{"level":"info","ts":"2022-05-31T19:50:33.396Z","caller":"traceutil/trace.go:171","msg":"trace[1706851417] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:661; }","duration":"152.4521ms","start":"2022-05-31T19:50:33.243Z","end":"2022-05-31T19:50:33.396Z","steps":["trace[1706851417] 'range keys from in-memory index tree'  (duration: 152.0758ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-31T19:50:33.600Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"101.728ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:2 size:8146"}
	{"level":"info","ts":"2022-05-31T19:50:33.600Z","caller":"traceutil/trace.go:171","msg":"trace[483012132] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:2; response_revision:664; }","duration":"101.8895ms","start":"2022-05-31T19:50:33.498Z","end":"2022-05-31T19:50:33.600Z","steps":["trace[483012132] 'range keys from in-memory index tree'  (duration: 101.4133ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-31T19:50:39.659Z","caller":"traceutil/trace.go:171","msg":"trace[953193134] linearizableReadLoop","detail":"{readStateIndex:710; appliedIndex:710; }","duration":"152.8262ms","start":"2022-05-31T19:50:39.506Z","end":"2022-05-31T19:50:39.659Z","steps":["trace[953193134] 'read index received'  (duration: 152.8153ms)","trace[953193134] 'applied index is now lower than readState.Index'  (duration: 7.7µs)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T19:50:39.789Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"283.0709ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:2 size:8226"}
	{"level":"info","ts":"2022-05-31T19:50:39.789Z","caller":"traceutil/trace.go:171","msg":"trace[399536287] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:2; response_revision:682; }","duration":"283.3096ms","start":"2022-05-31T19:50:39.506Z","end":"2022-05-31T19:50:39.789Z","steps":["trace[399536287] 'agreement among raft nodes before linearized reading'  (duration: 152.947ms)","trace[399536287] 'range keys from in-memory index tree'  (duration: 130.069ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  19:51:18 up  2:39,  0 users,  load average: 6.79, 6.78, 5.53
	Linux embed-certs-20220531193346-2108 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [e45e0056f239] <==
	* Trace[1285948954]: [3.4803604s] [3.4803604s] END
	W0531 19:49:44.524449       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 19:49:44.524709       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 19:49:44.524731       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0531 19:49:46.031296       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.96.73.22]
	I0531 19:49:46.153235       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	W0531 19:49:47.228359       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 19:49:47.228494       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 19:49:47.228518       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0531 19:49:47.527887       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.104.138.203]
	I0531 19:49:47.923023       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.111.114.93]
	I0531 19:50:22.725653       1 trace.go:205] Trace[2016679469]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:ea88b0c9-56cd-47d1-897e-1dbec0f8202e,client:192.168.67.2,accept:application/json, */*,protocol:HTTP/2.0 (31-May-2022 19:50:21.650) (total time: 1074ms):
	Trace[2016679469]: ---"About to write a response" 1074ms (19:50:22.724)
	Trace[2016679469]: [1.0747322s] [1.0747322s] END
	I0531 19:50:22.725688       1 trace.go:205] Trace[1282923037]: "List etcd3" key:/pods/kubernetes-dashboard,resourceVersion:,resourceVersionMatch:,limit:0,continue: (31-May-2022 19:50:21.523) (total time: 1201ms):
	Trace[1282923037]: [1.2019758s] [1.2019758s] END
	I0531 19:50:22.727403       1 trace.go:205] Trace[2084601158]: "List" url:/api/v1/namespaces/kubernetes-dashboard/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:db8582f0-b85e-4082-a80e-2adc58a61f30,client:192.168.67.1,accept:application/json, */*,protocol:HTTP/2.0 (31-May-2022 19:50:21.523) (total time: 1203ms):
	Trace[2084601158]: ---"Listing from storage done" 1202ms (19:50:22.726)
	Trace[2084601158]: [1.2037177s] [1.2037177s] END
	W0531 19:50:47.229649       1 handler_proxy.go:104] no RequestInfo found in the context
	E0531 19:50:47.229759       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 19:50:47.229778       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [0607075503f4] <==
	* E0531 19:49:45.641382       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 19:49:45.834128       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 19:49:45.835411       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:49:45.835428       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 19:49:45.835618       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 19:49:45.843602       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 19:49:45.843937       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:49:45.843988       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 19:49:45.844036       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 19:49:45.925714       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 19:49:45.925759       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:49:45.925815       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 19:49:45.926564       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 19:49:46.224823       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:49:46.225647       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 19:49:46.227723       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:49:46.227761       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 19:49:46.439790       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-9rhsq"
	I0531 19:49:46.439839       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-g6xnz"
	E0531 19:49:59.627032       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 19:50:00.142692       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 19:50:29.736689       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 19:50:30.332044       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 19:50:59.777785       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 19:51:00.523901       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [50d3e2775173] <==
	* E0531 19:49:45.130628       1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I0531 19:49:45.143693       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0531 19:49:45.151050       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0531 19:49:45.236963       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0531 19:49:45.244432       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0531 19:49:45.332877       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0531 19:49:45.639808       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0531 19:49:45.639890       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0531 19:49:45.640670       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 19:49:46.142410       1 server_others.go:206] "Using iptables Proxier"
	I0531 19:49:46.142631       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 19:49:46.142655       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 19:49:46.142803       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 19:49:46.144334       1 server.go:656] "Version info" version="v1.23.6"
	I0531 19:49:46.145879       1 config.go:317] "Starting service config controller"
	I0531 19:49:46.146010       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 19:49:46.146722       1 config.go:226] "Starting endpoint slice config controller"
	I0531 19:49:46.146736       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 19:49:46.247171       1 shared_informer.go:247] Caches are synced for service config 
	I0531 19:49:46.247227       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [482a2f211adf] <==
	* W0531 19:49:11.983904       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 19:49:11.984080       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 19:49:11.984657       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 19:49:11.984777       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 19:49:12.056586       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 19:49:12.056746       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 19:49:12.062018       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 19:49:12.062084       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 19:49:12.069118       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 19:49:12.069246       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 19:49:12.123835       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 19:49:12.124017       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 19:49:12.325108       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 19:49:12.325352       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 19:49:12.325399       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 19:49:12.325842       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 19:49:12.336385       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 19:49:12.336522       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 19:49:12.342122       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 19:49:12.342245       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 19:49:12.375792       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 19:49:12.375925       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 19:49:14.478824       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0531 19:49:14.478976       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0531 19:49:15.040440       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 19:43:05 UTC, end at Tue 2022-05-31 19:51:18 UTC. --
	May 31 19:51:18 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:18Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/besteffort/podcbd345e6-4310-42fe-a76c-9f335f470d4d/2b26f9f46e44565f1c2e001155defd97b977122e0e621b13581d05976c1b7b93: device or resource busy"
	May 31 19:51:18 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:18Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod56c9cdb4-db41-46c5-8cd4-209677050138/71f6a19ff0ef670dfa232a840aeac80f93dc8554b902cd7cf38daa534ee954d7: device or resource busy"
	May 31 19:51:18 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:18Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/besteffort/pod203b1d21-0785-45f8-a005-ccd8b231048b/b0eb67caf61ba77a4e12b80f810e99e7337f9d9de00929305f89327c700f3793: device or resource busy"
	May 31 19:51:18 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:18Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/podce596a6602b29a1aa40c59cd0f2e881f/4dee898a48112a5532769c2988b800aaa45f5f0565a70bffcc8a07ade742104b: device or resource busy"
	May 31 19:51:18 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:18Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/podbfb645df10bc1c0532ff7af877b3f38c/a069846beba9f18c6be439b3f7a00364fc4dac17fc5956f0e7274ec8b629a0fa: device or resource busy"
	May 31 19:51:18 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:18Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/podc6857c844cb9ecf685bba20626e8b532/06dbdf88324c6f470d371247122610f9c81a280b764169403ccb1aac547fe2a3: device or resource busy"
	May 31 19:51:18 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:18Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/besteffort/pod9a740f1b-9edc-4440-b404-ed74d49c418d/152251cf734540cadf4adce797803fc34a434bfe785b30e55447d93a047b8f71: device or resource busy"
	May 31 19:51:18 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:18Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/podc6857c844cb9ecf685bba20626e8b532/06dbdf88324c6f470d371247122610f9c81a280b764169403ccb1aac547fe2a3: device or resource busy"
	May 31 19:51:18 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:18Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/besteffort/pod203b1d21-0785-45f8-a005-ccd8b231048b/b0eb67caf61ba77a4e12b80f810e99e7337f9d9de00929305f89327c700f3793: device or resource busy"
	May 31 19:51:18 embed-certs-20220531193346-2108 kubelet[5169]: I0531 19:51:18.560607    5169 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable podc6857c844cb9ecf685bba20626e8b532] err="unable to destroy cgroup paths for cgroup [kubepods burstable podc6857c844cb9ecf685bba20626e8b532] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/podc6857c844cb9ecf685bba20626e8b532]"
	May 31 19:51:18 embed-certs-20220531193346-2108 kubelet[5169]: I0531 19:51:18.560763    5169 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods besteffort pod203b1d21-0785-45f8-a005-ccd8b231048b] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod203b1d21-0785-45f8-a005-ccd8b231048b] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/besteffort/pod203b1d21-0785-45f8-a005-ccd8b231048b]"
	May 31 19:51:18 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:18Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod4f3db99b-c044-4ef4-afa5-a5482e1646ff/5b15006aeef80bbe469cbd49cde41ad3b1653e9185e881fccd036b2adab772ac: device or resource busy"
	May 31 19:51:18 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:18Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod338c4bbf5c704a2a0ada42f2bf66d93d/29f57e4f7f015abd015519b511289d402d27aa9450531a22ac2f95291ea19963: device or resource busy"
	May 31 19:51:18 embed-certs-20220531193346-2108 kubelet[5169]: I0531 19:51:18.560970    5169 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod338c4bbf5c704a2a0ada42f2bf66d93d] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod338c4bbf5c704a2a0ada42f2bf66d93d] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod338c4bbf5c704a2a0ada42f2bf66d93d]"
	May 31 19:51:18 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:18Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/podce596a6602b29a1aa40c59cd0f2e881f/4dee898a48112a5532769c2988b800aaa45f5f0565a70bffcc8a07ade742104b: device or resource busy"
	May 31 19:51:18 embed-certs-20220531193346-2108 kubelet[5169]: I0531 19:51:18.560981    5169 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod4f3db99b-c044-4ef4-afa5-a5482e1646ff] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod4f3db99b-c044-4ef4-afa5-a5482e1646ff] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod4f3db99b-c044-4ef4-afa5-a5482e1646ff]"
	May 31 19:51:18 embed-certs-20220531193346-2108 kubelet[5169]: I0531 19:51:18.561026    5169 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable podce596a6602b29a1aa40c59cd0f2e881f] err="unable to destroy cgroup paths for cgroup [kubepods burstable podce596a6602b29a1aa40c59cd0f2e881f] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/podce596a6602b29a1aa40c59cd0f2e881f]"
	May 31 19:51:18 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:18Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod56c9cdb4-db41-46c5-8cd4-209677050138/71f6a19ff0ef670dfa232a840aeac80f93dc8554b902cd7cf38daa534ee954d7: device or resource busy"
	May 31 19:51:18 embed-certs-20220531193346-2108 kubelet[5169]: I0531 19:51:18.561127    5169 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod56c9cdb4-db41-46c5-8cd4-209677050138] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod56c9cdb4-db41-46c5-8cd4-209677050138] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod56c9cdb4-db41-46c5-8cd4-209677050138]"
	May 31 19:51:18 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:18Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/podbfb645df10bc1c0532ff7af877b3f38c/a069846beba9f18c6be439b3f7a00364fc4dac17fc5956f0e7274ec8b629a0fa: device or resource busy"
	May 31 19:51:18 embed-certs-20220531193346-2108 kubelet[5169]: I0531 19:51:18.561408    5169 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable podbfb645df10bc1c0532ff7af877b3f38c] err="unable to destroy cgroup paths for cgroup [kubepods burstable podbfb645df10bc1c0532ff7af877b3f38c] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/podbfb645df10bc1c0532ff7af877b3f38c]"
	May 31 19:51:18 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:18Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/besteffort/podcbd345e6-4310-42fe-a76c-9f335f470d4d/2b26f9f46e44565f1c2e001155defd97b977122e0e621b13581d05976c1b7b93: device or resource busy"
	May 31 19:51:18 embed-certs-20220531193346-2108 kubelet[5169]: I0531 19:51:18.563952    5169 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods besteffort podcbd345e6-4310-42fe-a76c-9f335f470d4d] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podcbd345e6-4310-42fe-a76c-9f335f470d4d] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/besteffort/podcbd345e6-4310-42fe-a76c-9f335f470d4d]"
	May 31 19:51:18 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:18Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/besteffort/pod9a740f1b-9edc-4440-b404-ed74d49c418d/152251cf734540cadf4adce797803fc34a434bfe785b30e55447d93a047b8f71: device or resource busy"
	May 31 19:51:18 embed-certs-20220531193346-2108 kubelet[5169]: I0531 19:51:18.569368    5169 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods besteffort pod9a740f1b-9edc-4440-b404-ed74d49c418d] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod9a740f1b-9edc-4440-b404-ed74d49c418d] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/besteffort/pod9a740f1b-9edc-4440-b404-ed74d49c418d]"
	
	* 
	* ==> kubernetes-dashboard [d951ec257dd1] <==
	* 2022/05/31 19:50:33 Starting overwatch
	2022/05/31 19:50:33 Using namespace: kubernetes-dashboard
	2022/05/31 19:50:33 Using in-cluster config to connect to apiserver
	2022/05/31 19:50:33 Using secret token for csrf signing
	2022/05/31 19:50:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/05/31 19:50:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/05/31 19:50:33 Successful initial request to the apiserver, version: v1.23.6
	2022/05/31 19:50:33 Generating JWE encryption key
	2022/05/31 19:50:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/05/31 19:50:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/05/31 19:50:34 Initializing JWE encryption key from synchronized object
	2022/05/31 19:50:34 Creating in-cluster Sidecar client
	2022/05/31 19:50:34 Serving insecurely on HTTP port: 9090
	2022/05/31 19:50:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/05/31 19:51:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [9a7ffad2971b] <==
	* I0531 19:49:50.029861       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0531 19:49:50.127335       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0531 19:49:50.127719       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0531 19:49:50.230182       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0531 19:49:50.230445       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"848a626b-366f-40da-b374-e7c69b22f1b4", APIVersion:"v1", ResourceVersion:"598", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20220531193346-2108_77c199d1-f409-4014-82d5-d028fac289c7 became leader
	I0531 19:49:50.230610       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20220531193346-2108_77c199d1-f409-4014-82d5-d028fac289c7!
	I0531 19:49:50.332394       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20220531193346-2108_77c199d1-f409-4014-82d5-d028fac289c7!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220531193346-2108 -n embed-certs-20220531193346-2108
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220531193346-2108 -n embed-certs-20220531193346-2108: (7.3450483s)
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220531193346-2108 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-w48dh
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220531193346-2108 describe pod metrics-server-b955d9d8-w48dh
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220531193346-2108 describe pod metrics-server-b955d9d8-w48dh: exit status 1 (322.2053ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-w48dh" not found

** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220531193346-2108 describe pod metrics-server-b955d9d8-w48dh: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220531193346-2108
helpers_test.go:231: (dbg) Done: docker inspect embed-certs-20220531193346-2108: (1.1989981s)
helpers_test.go:235: (dbg) docker inspect embed-certs-20220531193346-2108:

-- stdout --
	[
	    {
	        "Id": "b3ef283712d9f0ede7e8292a0ec99b2d65b54ce8813a8abec9392bf47d24f60e",
	        "Created": "2022-05-31T19:40:27.5730813Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 243450,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-31T19:43:04.8203689Z",
	            "FinishedAt": "2022-05-31T19:42:40.7035645Z"
	        },
	        "Image": "sha256:aedbaa58534633065a66af6f01ba15f6c7dc1b8a285b6938f9d04325ceab9ed4",
	        "ResolvConfPath": "/var/lib/docker/containers/b3ef283712d9f0ede7e8292a0ec99b2d65b54ce8813a8abec9392bf47d24f60e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b3ef283712d9f0ede7e8292a0ec99b2d65b54ce8813a8abec9392bf47d24f60e/hostname",
	        "HostsPath": "/var/lib/docker/containers/b3ef283712d9f0ede7e8292a0ec99b2d65b54ce8813a8abec9392bf47d24f60e/hosts",
	        "LogPath": "/var/lib/docker/containers/b3ef283712d9f0ede7e8292a0ec99b2d65b54ce8813a8abec9392bf47d24f60e/b3ef283712d9f0ede7e8292a0ec99b2d65b54ce8813a8abec9392bf47d24f60e-json.log",
	        "Name": "/embed-certs-20220531193346-2108",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220531193346-2108:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220531193346-2108",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c2e8ea02cea06b1bfa76cdee092b60de61a62f2fa5b1fefbc00c3722dae510ce-init/diff:/var/lib/docker/overlay2/42ebd8012a176a6c9bc83a2b81ffb1eb5c8e01d5410cb5d59346522bbaddf2cc/diff:/var/lib/docker/overlay2/59dce173ea661e9679f479af711a101ab0e97afb60abfd3c5b7a199b5c3e2b3b/diff:/var/lib/docker/overlay2/0328b60a223ca9f8bab93e6b86106d8b64d16fa559a56e88abbdee372b3b6a70/diff:/var/lib/docker/overlay2/b781f2620a052ee02138337819bde18c09122be2f20b7cfefaf7688f18d0c559/diff:/var/lib/docker/overlay2/af966c145b90b1748180b9ffcb1521d6fa9914e1d0ca582b239123591ffd1527/diff:/var/lib/docker/overlay2/5cd2b511f6f3bc93855ed77b5510ca4c67426eea433ccda53ea8e864342a413e/diff:/var/lib/docker/overlay2/f896d291d0c004470c3e38ea0d3be8e2b2a48ea36d45662c40fe3e105cbf4dec/diff:/var/lib/docker/overlay2/9e8994dcf5b1692245d5e40982d040298bfa7f7977892cf4be8ba3697f2c1283/diff:/var/lib/docker/overlay2/a7da4130c1b629e2a737b34701c6d4dfe6c48f92771856a887e06a1edc5456f8/diff:/var/lib/docker/overlay2/4c25734b9c8459489256b5f70dbb446897b9510d1cf9187e903f845ffa2a7ec2/diff:/var/lib/docker/overlay2/5c6cef49a0d0d1a36777fa7e0955ecdffb41ce354b7984f232e9cd51916416f7/diff:/var/lib/docker/overlay2/b79c799ed97edb702ed4c4ccb55ef9c645ae162e30e8f297ca5dd1152c29de41/diff:/var/lib/docker/overlay2/c84b7bc7c79ffdedf2d1265e21eec011dc3215811fb0569f7eb7d6b9aec884e8/diff:/var/lib/docker/overlay2/df8e2c3af362fd04ee17cb8d67105cf489427b2ae7cec77b79a2778e6c8c0234/diff:/var/lib/docker/overlay2/e56e356f8425868b31ada978267de73f074f211985ff1849ece7ab8341c33bae/diff:/var/lib/docker/overlay2/82c032066e83d3297742c83dd29132974e9db73a0b0b0a8edd3bcbbdb29cd53c/diff:/var/lib/docker/overlay2/15532131f3e6d0b2faf705733b06ae0c869147f2ca9592e3a80b6eaadad23544/diff:/var/lib/docker/overlay2/73fa456f504732f46cbe49368167247ca47b3099a6a75a7023ba16e7f598aee5/diff:/var/lib/docker/overlay2/e5635e020aadcc8dd1e5e3cd2eaa45cb97147f47bf406211fc61d7cbfc531193/diff:/var/lib/docker/overlay2/40b76b3249d3f7a8a737e2db80ebc1ed3b76d59724641217e8aae414ad832781/diff:/var/lib/docker/overlay2/50ea2ce78d4fe52f626b2755a14f71a3c4f9b5a4f929646d9200876bdb1652c1/diff:/var/lib/docker/overlay2/d0a6e94d1f4aa73824d39c6e655bc4bdcd6568cea821b5d0f71174591c9cbbb3/diff:/var/lib/docker/overlay2/20c8fbe37a8c89a03b7bffe8cbc507e888cd5886f86f43b551d6a09fee1ce5e7/diff:/var/lib/docker/overlay2/48942b31cfe24e44c65a8be1785cd90488444f8c420a79b72a123034b01dd3f8/diff:/var/lib/docker/overlay2/c90124ab97e02facd949bfbd45815d6d73a40303b47ba4a4bc035788f5ee2dc3/diff:/var/lib/docker/overlay2/38c82aeabee1c8f46551413ecabb24f2f22680bb623f79e40c751558747a03f5/diff:/var/lib/docker/overlay2/4fa8894d1c1d773bc2e0511f273eab03fb7b8be7489eab5cd3eb57cc0d12e855/diff:/var/lib/docker/overlay2/23319fcddb47e50928e2044bac662de8153728f3a2eefa9c6ad5a5f413efec88/diff:/var/lib/docker/overlay2/b7ecd073b5b747c21ecbd1ca61887899f7e227fac3e383e24f868549b7929d74/diff:/var/lib/docker/overlay2/29a5674b4bbabfd07c4ce0b2a8b84ce98af380bf984043a4a9a6cd0743e4630c/diff:/var/lib/docker/overlay2/86a10266979ed72dc4372ade724e64741de35702626642ba60a15cca1433682e/diff:/var/lib/docker/overlay2/03a1af7f82f1cb2b6eadbd1f13c8e9f6ca281ef3a8968d6aa45d284f286aefca/diff:/var/lib/docker/overlay2/f36cce4566278d24128326f8ef6ea446884c0c6941ccdb763ddf936e178afbff/diff:/var/lib/docker/overlay2/e54a2a61ba3597af53ec65a822821ffca97788e4b1dbfeedf98bf4d12e78973d/diff:/var/lib/docker/overlay2/dd54a25b898b0d7952f0bcb99a0450ee3d6b4269599e9355b4ae5e0c540c2caa/diff:/var/lib/docker/overlay2/ae6c1d1e9e79e03382217f21886420e3118a3f18f7c44f76c19262a84a43e219/diff:/var/lib/docker/overlay2/82faa00f86c1fa99063466464f71cdd6d510aa3e45c6c43301b2119b5bd5285a/diff:/var/lib/docker/overlay2/9f54999972b485642f042b9ed4d00316be0a1d35c060e619aca79b1583180446/diff:/var/lib/docker/overlay2/b467240c20564ba44d0946c716cf18ab5be973b43b02c37ee3ddd8f94502f41b/diff:/var/lib/docker/overlay2/21217d4ff1c5cf81dd53cfd831e0961189fb9f86812e1f53843f0022383345e6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c2e8ea02cea06b1bfa76cdee092b60de61a62f2fa5b1fefbc00c3722dae510ce/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c2e8ea02cea06b1bfa76cdee092b60de61a62f2fa5b1fefbc00c3722dae510ce/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c2e8ea02cea06b1bfa76cdee092b60de61a62f2fa5b1fefbc00c3722dae510ce/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220531193346-2108",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220531193346-2108/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220531193346-2108",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220531193346-2108",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220531193346-2108",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "20b23ab4993bd1a2ae6054def826e6e68e6055d6ab2db42139d3358a41f3f701",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54560"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54561"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54557"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54558"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54559"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/20b23ab4993b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220531193346-2108": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b3ef283712d9",
	                        "embed-certs-20220531193346-2108"
	                    ],
	                    "NetworkID": "fa417081acd115df61da89d4421c202d2b7d946ea3a40caf53be2b9b0c3bc79d",
	                    "EndpointID": "3f2687aa282b7b16fdfaa47b248e7c36c3d5605834959b55952f7e7bb048c2ee",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220531193346-2108 -n embed-certs-20220531193346-2108
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220531193346-2108 -n embed-certs-20220531193346-2108: (9.1979165s)
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-20220531193346-2108 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p embed-certs-20220531193346-2108 logs -n 25: (10.8288318s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |       User        |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|-------------------|----------------|---------------------|---------------------|
	| addons  | enable dashboard -p                                        | embed-certs-20220531193346-2108                | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:42 GMT | 31 May 22 19:42 GMT |
	|         | embed-certs-20220531193346-2108                            |                                                |                   |                |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |                   |                |                     |                     |
	| start   | -p newest-cni-20220531193849-2108 --memory=2200            | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:41 GMT | 31 May 22 19:43 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |                   |                |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |                   |                |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |                   |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |                   |                |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.23.6               |                                                |                   |                |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:43 GMT | 31 May 22 19:43 GMT |
	|         | newest-cni-20220531193849-2108                             |                                                |                   |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |                   |                |                     |                     |
	| start   | -p                                                         | old-k8s-version-20220531192531-2108            | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:35 GMT | 31 May 22 19:43 GMT |
	|         | old-k8s-version-20220531192531-2108                        |                                                |                   |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                |                   |                |                     |                     |
	|         | --wait=true --kvm-network=default                          |                                                |                   |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                |                   |                |                     |                     |
	|         | --disable-driver-mounts                                    |                                                |                   |                |                     |                     |
	|         | --keep-context=false                                       |                                                |                   |                |                     |                     |
	|         | --driver=docker                                            |                                                |                   |                |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                                                |                   |                |                     |                     |
	| pause   | -p                                                         | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:43 GMT | 31 May 22 19:43 GMT |
	|         | newest-cni-20220531193849-2108                             |                                                |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |                   |                |                     |                     |
	| ssh     | -p                                                         | old-k8s-version-20220531192531-2108            | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:43 GMT | 31 May 22 19:43 GMT |
	|         | old-k8s-version-20220531192531-2108                        |                                                |                   |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |                   |                |                     |                     |
	| unpause | -p                                                         | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:43 GMT | 31 May 22 19:43 GMT |
	|         | newest-cni-20220531193849-2108                             |                                                |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |                   |                |                     |                     |
	| logs    | old-k8s-version-20220531192531-2108                        | old-k8s-version-20220531192531-2108            | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:44 GMT | 31 May 22 19:44 GMT |
	|         | logs -n 25                                                 |                                                |                   |                |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220531193451-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:37 GMT | 31 May 22 19:44 GMT |
	|         | default-k8s-different-port-20220531193451-2108             |                                                |                   |                |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |                   |                |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |                   |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |                   |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:44 GMT | 31 May 22 19:44 GMT |
	|         | newest-cni-20220531193849-2108                             |                                                |                   |                |                     |                     |
	| logs    | old-k8s-version-20220531192531-2108                        | old-k8s-version-20220531192531-2108            | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:44 GMT | 31 May 22 19:44 GMT |
	|         | logs -n 25                                                 |                                                |                   |                |                     |                     |
	| delete  | -p                                                         | newest-cni-20220531193849-2108                 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:44 GMT | 31 May 22 19:44 GMT |
	|         | newest-cni-20220531193849-2108                             |                                                |                   |                |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220531192531-2108            | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:45 GMT | 31 May 22 19:45 GMT |
	|         | old-k8s-version-20220531192531-2108                        |                                                |                   |                |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220531193451-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:45 GMT | 31 May 22 19:45 GMT |
	|         | default-k8s-different-port-20220531193451-2108             |                                                |                   |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |                   |                |                     |                     |
	| delete  | -p                                                         | old-k8s-version-20220531192531-2108            | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:45 GMT | 31 May 22 19:45 GMT |
	|         | old-k8s-version-20220531192531-2108                        |                                                |                   |                |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220531193451-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:45 GMT | 31 May 22 19:45 GMT |
	|         | default-k8s-different-port-20220531193451-2108             |                                                |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |                   |                |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220531193451-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:46 GMT | 31 May 22 19:46 GMT |
	|         | default-k8s-different-port-20220531193451-2108             |                                                |                   |                |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |                   |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220531193451-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:46 GMT | 31 May 22 19:47 GMT |
	|         | default-k8s-different-port-20220531193451-2108             |                                                |                   |                |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220531193451-2108 | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:47 GMT | 31 May 22 19:47 GMT |
	|         | default-k8s-different-port-20220531193451-2108             |                                                |                   |                |                     |                     |
	| start   | -p kindnet-20220531191930-2108                             | kindnet-20220531191930-2108                    | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:45 GMT | 31 May 22 19:48 GMT |
	|         | --memory=2048                                              |                                                |                   |                |                     |                     |
	|         | --alsologtostderr                                          |                                                |                   |                |                     |                     |
	|         | --wait=true --wait-timeout=5m                              |                                                |                   |                |                     |                     |
	|         | --cni=kindnet --driver=docker                              |                                                |                   |                |                     |                     |
	| ssh     | -p kindnet-20220531191930-2108                             | kindnet-20220531191930-2108                    | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:48 GMT | 31 May 22 19:48 GMT |
	|         | pgrep -a kubelet                                           |                                                |                   |                |                     |                     |
	| delete  | -p kindnet-20220531191930-2108                             | kindnet-20220531191930-2108                    | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:49 GMT | 31 May 22 19:49 GMT |
	| start   | -p                                                         | embed-certs-20220531193346-2108                | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:42 GMT | 31 May 22 19:49 GMT |
	|         | embed-certs-20220531193346-2108                            |                                                |                   |                |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                |                   |                |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                |                   |                |                     |                     |
	|         | --driver=docker                                            |                                                |                   |                |                     |                     |
	|         | --kubernetes-version=v1.23.6                               |                                                |                   |                |                     |                     |
	| ssh     | -p                                                         | embed-certs-20220531193346-2108                | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:50 GMT | 31 May 22 19:50 GMT |
	|         | embed-certs-20220531193346-2108                            |                                                |                   |                |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |                   |                |                     |                     |
	| logs    | embed-certs-20220531193346-2108                            | embed-certs-20220531193346-2108                | minikube7\jenkins | v1.26.0-beta.1 | 31 May 22 19:51 GMT | 31 May 22 19:51 GMT |
	|         | logs -n 25                                                 |                                                |                   |                |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|-------------------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* I0531 19:49:50.346266    8616 api_server.go:71] duration metric: took 18.7567329s to wait for apiserver process to appear ...
	I0531 19:49:50.346266    8616 api_server.go:87] waiting for apiserver healthz status ...
	I0531 19:49:50.346266    8616 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:54559/healthz ...
	I0531 19:49:50.439360    8616 api_server.go:266] https://127.0.0.1:54559/healthz returned 200:
	ok
	I0531 19:49:50.446782    8616 api_server.go:140] control plane version: v1.23.6
	I0531 19:49:50.446782    8616 api_server.go:130] duration metric: took 100.5156ms to wait for apiserver health ...
	I0531 19:49:50.446782    8616 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 19:49:50.557855    8616 system_pods.go:59] 9 kube-system pods found
	I0531 19:49:50.557855    8616 system_pods.go:61] "coredns-64897985d-5m9xf" [93fcde9d-8331-47a5-bb17-d9346196ab6f] Running
	I0531 19:49:50.557855    8616 system_pods.go:61] "coredns-64897985d-rx2dd" [e0ba19c1-80e0-4443-bf8c-f40c1d6ee893] Running
	I0531 19:49:50.557855    8616 system_pods.go:61] "etcd-embed-certs-20220531193346-2108" [ae630bdb-56d7-428d-ad63-ba21a1788353] Running
	I0531 19:49:50.557855    8616 system_pods.go:61] "kube-apiserver-embed-certs-20220531193346-2108" [e974f7b7-53fd-44cd-9c04-de97f963802a] Running
	I0531 19:49:50.557855    8616 system_pods.go:61] "kube-controller-manager-embed-certs-20220531193346-2108" [dae57bd0-d4e3-4bdc-8ced-4f046ccc3173] Running
	I0531 19:49:50.557855    8616 system_pods.go:61] "kube-proxy-jqpk2" [6183a47a-8d01-42ca-9726-4b2e540a42e8] Running
	I0531 19:49:50.557855    8616 system_pods.go:61] "kube-scheduler-embed-certs-20220531193346-2108" [de4aa727-145e-4ab2-a845-4b5cb54df891] Running
	I0531 19:49:50.557855    8616 system_pods.go:61] "metrics-server-b955d9d8-w48dh" [46b08093-d98e-43c9-9180-1bd5c1294a67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 19:49:50.557855    8616 system_pods.go:61] "storage-provisioner" [4b942c94-ac31-4c5a-8901-728ebf0506e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0531 19:49:50.557855    8616 system_pods.go:74] duration metric: took 111.0723ms to wait for pod list to return data ...
	I0531 19:49:50.557855    8616 default_sa.go:34] waiting for default service account to be created ...
	I0531 19:49:50.632635    8616 default_sa.go:45] found service account: "default"
	I0531 19:49:50.632635    8616 default_sa.go:55] duration metric: took 74.7795ms for default service account to be created ...
	I0531 19:49:50.632635    8616 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 19:49:50.740763    8616 system_pods.go:86] 9 kube-system pods found
	I0531 19:49:50.740763    8616 system_pods.go:89] "coredns-64897985d-5m9xf" [93fcde9d-8331-47a5-bb17-d9346196ab6f] Running
	I0531 19:49:50.741320    8616 system_pods.go:89] "coredns-64897985d-rx2dd" [e0ba19c1-80e0-4443-bf8c-f40c1d6ee893] Running
	I0531 19:49:50.741320    8616 system_pods.go:89] "etcd-embed-certs-20220531193346-2108" [ae630bdb-56d7-428d-ad63-ba21a1788353] Running
	I0531 19:49:50.741385    8616 system_pods.go:89] "kube-apiserver-embed-certs-20220531193346-2108" [e974f7b7-53fd-44cd-9c04-de97f963802a] Running
	I0531 19:49:50.741385    8616 system_pods.go:89] "kube-controller-manager-embed-certs-20220531193346-2108" [dae57bd0-d4e3-4bdc-8ced-4f046ccc3173] Running
	I0531 19:49:50.741385    8616 system_pods.go:89] "kube-proxy-jqpk2" [6183a47a-8d01-42ca-9726-4b2e540a42e8] Running
	I0531 19:49:50.741385    8616 system_pods.go:89] "kube-scheduler-embed-certs-20220531193346-2108" [de4aa727-145e-4ab2-a845-4b5cb54df891] Running
	I0531 19:49:50.741385    8616 system_pods.go:89] "metrics-server-b955d9d8-w48dh" [46b08093-d98e-43c9-9180-1bd5c1294a67] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0531 19:49:50.741385    8616 system_pods.go:89] "storage-provisioner" [4b942c94-ac31-4c5a-8901-728ebf0506e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0531 19:49:50.741385    8616 system_pods.go:126] duration metric: took 108.7499ms to wait for k8s-apps to be running ...
	I0531 19:49:50.741385    8616 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 19:49:50.760406    8616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:49:50.860598    8616 system_svc.go:56] duration metric: took 119.2121ms WaitForService to wait for kubelet.
	I0531 19:49:50.860598    8616 kubeadm.go:572] duration metric: took 19.2710623s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0531 19:49:50.860598    8616 node_conditions.go:102] verifying NodePressure condition ...
	I0531 19:49:50.928192    8616 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0531 19:49:50.928192    8616 node_conditions.go:123] node cpu capacity is 16
	I0531 19:49:50.928192    8616 node_conditions.go:105] duration metric: took 67.5936ms to run NodePressure ...
	I0531 19:49:50.929192    8616 start.go:213] waiting for startup goroutines ...
	I0531 19:49:51.152403    8616 start.go:504] kubectl: 1.18.2, cluster: 1.23.6 (minor skew: 5)
	Log file created at: 2022/05/31 19:49:51
	Running on machine: minikube7
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 19:49:51.084809    9204 out.go:296] Setting OutFile to fd 1616 ...
	I0531 19:49:51.167390    9204 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:49:51.167390    9204 out.go:309] Setting ErrFile to fd 1840...
	I0531 19:49:51.167390    9204 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:49:51.186974    9204 out.go:303] Setting JSON to false
	I0531 19:49:51.190465    9204 start.go:115] hostinfo: {"hostname":"minikube7","uptime":84861,"bootTime":1653941730,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0531 19:49:51.190465    9204 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 19:49:51.254771    9204 out.go:177] * [calico-20220531191937-2108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0531 19:49:51.254771    8616 out.go:177] 
	I0531 19:49:51.260770    9204 notify.go:193] Checking for updates...
	W0531 19:49:51.260770    8616 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilites with Kubernetes 1.23.6.
	I0531 19:49:51.269758    9204 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0531 19:49:51.269758    8616 out.go:177]   - Want kubectl v1.23.6? Try 'minikube kubectl -- get pods -A'
	I0531 19:49:51.276754    9204 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0531 19:49:51.281753    8616 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220531193346-2108" cluster and "default" namespace by default
	I0531 19:49:51.285755    9204 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 19:49:51.296776    9204 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:49:50.645797    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:49:53.140920    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:49:52.888286    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:49:54.499135    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.6106726s)
	I0531 19:49:51.301760    9204 config.go:178] Loaded profile config "auto-20220531191922-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:49:51.301760    9204 config.go:178] Loaded profile config "cilium-20220531191937-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:49:51.302765    9204 config.go:178] Loaded profile config "embed-certs-20220531193346-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:49:51.302765    9204 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 19:49:54.992808    9204 docker.go:137] docker version: linux-20.10.14
	I0531 19:49:54.999822    9204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:49:57.486481    9204 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.4861987s)
	I0531 19:49:57.487080    9204 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:72 OomKillDisable:true NGoroutines:58 SystemTime:2022-05-31 19:49:56.2369805 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 19:49:57.604786    9204 out.go:177] * Using the docker driver based on user configuration
	I0531 19:49:55.143006    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:49:57.145340    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:49:57.516850    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:49:58.898389    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.3813979s)
	I0531 19:49:57.608317    9204 start.go:284] selected driver: docker
	I0531 19:49:57.608513    9204 start.go:806] validating driver "docker" against <nil>
	I0531 19:49:57.608513    9204 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:49:57.700957    9204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:50:00.270796    9204 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.5698275s)
	I0531 19:50:00.270796    9204 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:72 OomKillDisable:true NGoroutines:58 SystemTime:2022-05-31 19:49:59.025748 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 19:50:00.270796    9204 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0531 19:50:00.272492    9204 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 19:50:00.275572    9204 out.go:177] * Using Docker Desktop driver with the root privilege
	I0531 19:50:00.278649    9204 cni.go:95] Creating CNI manager for "calico"
	I0531 19:50:00.278649    9204 start_flags.go:301] Found "Calico" CNI - setting NetworkPlugin=cni
	I0531 19:50:00.278649    9204 start_flags.go:306] config:
	{Name:calico-20220531191937-2108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220531191937-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 19:50:00.283331    9204 out.go:177] * Starting control plane node calico-20220531191937-2108 in cluster calico-20220531191937-2108
	I0531 19:50:00.287897    9204 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 19:50:00.290886    9204 out.go:177] * Pulling base image ...
	I0531 19:50:00.295769    9204 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 19:50:00.295769    9204 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 19:50:00.295769    9204 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0531 19:50:00.295769    9204 cache.go:57] Caching tarball of preloaded images
	I0531 19:50:00.296331    9204 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 19:50:00.296577    9204 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0531 19:50:00.296780    9204 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\config.json ...
	I0531 19:50:00.296892    9204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\config.json: {Name:mk395a5aeceb2554c99cc9c4c3ac1d1fc9bee949 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:50:01.564541    9204 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 19:50:01.564541    9204 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 19:50:01.564541    9204 cache.go:206] Successfully downloaded all kic artifacts
	I0531 19:50:01.564541    9204 start.go:352] acquiring machines lock for calico-20220531191937-2108: {Name:mk229298a8341a90ce561add7d1a945d7b3315d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:50:01.564541    9204 start.go:356] acquired machines lock for "calico-20220531191937-2108" in 0s
	I0531 19:50:01.564541    9204 start.go:91] Provisioning new machine with config: &{Name:calico-20220531191937-2108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220531191937-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 19:50:01.564541    9204 start.go:131] createHost starting for "" (driver="docker")
	I0531 19:49:59.646777    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:02.332392    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:01.915706    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:03.216485    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.300773s)
	I0531 19:50:01.568587    9204 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0531 19:50:01.568587    9204 start.go:165] libmachine.API.Create for "calico-20220531191937-2108" (driver="docker")
	I0531 19:50:01.568587    9204 client.go:168] LocalClient.Create starting
	I0531 19:50:01.569562    9204 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0531 19:50:01.569562    9204 main.go:134] libmachine: Decoding PEM data...
	I0531 19:50:01.569562    9204 main.go:134] libmachine: Parsing certificate...
	I0531 19:50:01.569562    9204 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0531 19:50:01.569562    9204 main.go:134] libmachine: Decoding PEM data...
	I0531 19:50:01.569562    9204 main.go:134] libmachine: Parsing certificate...
	I0531 19:50:01.578550    9204 cli_runner.go:164] Run: docker network inspect calico-20220531191937-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 19:50:02.878167    9204 cli_runner.go:211] docker network inspect calico-20220531191937-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 19:50:02.878167    9204 cli_runner.go:217] Completed: docker network inspect calico-20220531191937-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2996109s)
	I0531 19:50:02.885168    9204 network_create.go:272] running [docker network inspect calico-20220531191937-2108] to gather additional debugging logs...
	I0531 19:50:02.885168    9204 cli_runner.go:164] Run: docker network inspect calico-20220531191937-2108
	W0531 19:50:04.147653    9204 cli_runner.go:211] docker network inspect calico-20220531191937-2108 returned with exit code 1
	I0531 19:50:04.147653    9204 cli_runner.go:217] Completed: docker network inspect calico-20220531191937-2108: (1.2624794s)
	I0531 19:50:04.147653    9204 network_create.go:275] error running [docker network inspect calico-20220531191937-2108]: docker network inspect calico-20220531191937-2108: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220531191937-2108
	I0531 19:50:04.147653    9204 network_create.go:277] output of [docker network inspect calico-20220531191937-2108]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220531191937-2108
	
	** /stderr **
	I0531 19:50:04.157637    9204 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 19:50:05.415515    9204 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2578719s)
	I0531 19:50:05.445814    9204 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000006d88] misses:0}
	I0531 19:50:05.445814    9204 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 19:50:05.445814    9204 network_create.go:115] attempt to create docker network calico-20220531191937-2108 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0531 19:50:05.453842    9204 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220531191937-2108
	I0531 19:50:04.642744    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:06.644986    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:09.144450    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:06.251698    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:07.600632    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.3488025s)
	I0531 19:50:06.783335    9204 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220531191937-2108: (1.3294874s)
	I0531 19:50:06.783335    9204 network_create.go:99] docker network calico-20220531191937-2108 192.168.49.0/24 created
	I0531 19:50:06.783335    9204 kic.go:106] calculated static IP "192.168.49.2" for the "calico-20220531191937-2108" container
	I0531 19:50:06.796335    9204 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 19:50:08.150320    9204 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.3528697s)
	I0531 19:50:08.159915    9204 cli_runner.go:164] Run: docker volume create calico-20220531191937-2108 --label name.minikube.sigs.k8s.io=calico-20220531191937-2108 --label created_by.minikube.sigs.k8s.io=true
	I0531 19:50:09.384153    9204 cli_runner.go:217] Completed: docker volume create calico-20220531191937-2108 --label name.minikube.sigs.k8s.io=calico-20220531191937-2108 --label created_by.minikube.sigs.k8s.io=true: (1.2242333s)
	I0531 19:50:09.384392    9204 oci.go:103] Successfully created a docker volume calico-20220531191937-2108
	I0531 19:50:09.393540    9204 cli_runner.go:164] Run: docker run --rm --name calico-20220531191937-2108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220531191937-2108 --entrypoint /usr/bin/test -v calico-20220531191937-2108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib
	I0531 19:50:11.642196    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:13.647209    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:10.615103    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:12.008187    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.3920811s)
	I0531 19:50:15.042820    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:12.953319    9204 cli_runner.go:217] Completed: docker run --rm --name calico-20220531191937-2108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220531191937-2108 --entrypoint /usr/bin/test -v calico-20220531191937-2108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib: (3.5597634s)
	I0531 19:50:12.953319    9204 oci.go:107] Successfully prepared a docker volume calico-20220531191937-2108
	I0531 19:50:12.953319    9204 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 19:50:12.953319    9204 kic.go:179] Starting extracting preloaded images to volume ...
	I0531 19:50:12.965313    9204 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220531191937-2108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 19:50:15.654692    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:18.086828    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:16.345289    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.3024634s)
	I0531 19:50:19.377999    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:20.146417    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:22.690233    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:20.665948    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.2879427s)
	I0531 19:50:23.696116    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:24.969191    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.2730695s)
	I0531 19:50:25.137333    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:27.203556    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:27.988449    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:29.200894    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.2124396s)
	I0531 19:50:29.657641    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:31.933017    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:32.227746    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:33.510930    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.2831022s)
	I0531 19:50:34.332888    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:36.646199    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:38.772665    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:36.545016    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:37.748014    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.202913s)
	I0531 19:50:40.202571    9204 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220531191937-2108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir: (27.2371385s)
	I0531 19:50:40.202571    9204 kic.go:188] duration metric: took 27.249133 seconds to extract preloaded images to volume
	I0531 19:50:40.212968    9204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:50:41.079065    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:43.087683    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:40.770103    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:41.922738    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.1526298s)
	I0531 19:50:44.926251    9164 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0531 19:50:44.926251    9164 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
	I0531 19:50:44.942411    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:42.425561    9204 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.212583s)
	I0531 19:50:42.426310    9204 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:74 OomKillDisable:true NGoroutines:58 SystemTime:2022-05-31 19:50:41.2734932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 19:50:42.437858    9204 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 19:50:44.621798    9204 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.1839299s)
	I0531 19:50:44.628791    9204 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220531191937-2108 --name calico-20220531191937-2108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220531191937-2108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220531191937-2108 --network calico-20220531191937-2108 --ip 192.168.49.2 --volume calico-20220531191937-2108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418
	I0531 19:50:45.089192    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:47.102726    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:46.171577    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.2291605s)
	W0531 19:50:46.171577    9164 delete.go:135] deletehost failed: Docker machine "auto-20220531191922-2108" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0531 19:50:46.180573    9164 cli_runner.go:164] Run: docker container inspect -f {{.Id}} auto-20220531191922-2108
	I0531 19:50:47.463241    9164 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} auto-20220531191922-2108: (1.2826628s)
	I0531 19:50:47.474237    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:48.820556    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.3463122s)
	I0531 19:50:48.827551    9164 cli_runner.go:164] Run: docker exec --privileged -t auto-20220531191922-2108 /bin/bash -c "sudo init 0"
	W0531 19:50:50.067176    9164 cli_runner.go:211] docker exec --privileged -t auto-20220531191922-2108 /bin/bash -c "sudo init 0" returned with exit code 1
	I0531 19:50:50.067176    9164 cli_runner.go:217] Completed: docker exec --privileged -t auto-20220531191922-2108 /bin/bash -c "sudo init 0": (1.2396197s)
	I0531 19:50:50.067176    9164 oci.go:625] error shutdown auto-20220531191922-2108: docker exec --privileged -t auto-20220531191922-2108 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 340ffde1adac470b31984d86e47615244c565b791afa400115f002bf5bf8dd67 is not running
	I0531 19:50:47.120642    9204 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220531191937-2108 --name calico-20220531191937-2108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220531191937-2108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220531191937-2108 --network calico-20220531191937-2108 --ip 192.168.49.2 --volume calico-20220531191937-2108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418: (2.4917556s)
	I0531 19:50:47.129556    9204 cli_runner.go:164] Run: docker container inspect calico-20220531191937-2108 --format={{.State.Running}}
	I0531 19:50:48.475805    9204 cli_runner.go:217] Completed: docker container inspect calico-20220531191937-2108 --format={{.State.Running}}: (1.3462433s)
	I0531 19:50:48.486615    9204 cli_runner.go:164] Run: docker container inspect calico-20220531191937-2108 --format={{.State.Status}}
	I0531 19:50:49.783563    9204 cli_runner.go:217] Completed: docker container inspect calico-20220531191937-2108 --format={{.State.Status}}: (1.2968512s)
	I0531 19:50:49.795640    9204 cli_runner.go:164] Run: docker exec calico-20220531191937-2108 stat /var/lib/dpkg/alternatives/iptables
	I0531 19:50:49.592858    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:52.094119    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:51.090703    9164 cli_runner.go:164] Run: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}
	I0531 19:50:52.331743    9164 cli_runner.go:217] Completed: docker container inspect auto-20220531191922-2108 --format={{.State.Status}}: (1.2408341s)
	I0531 19:50:52.331920    9164 oci.go:639] temporary error: container auto-20220531191922-2108 status is  but expect it to be exited
	I0531 19:50:52.332002    9164 oci.go:645] Successfully shutdown container auto-20220531191922-2108
	I0531 19:50:52.340140    9164 cli_runner.go:164] Run: docker rm -f -v auto-20220531191922-2108
	I0531 19:50:53.611440    9164 cli_runner.go:217] Completed: docker rm -f -v auto-20220531191922-2108: (1.2712945s)
	I0531 19:50:53.629598    9164 cli_runner.go:164] Run: docker container inspect -f {{.Id}} auto-20220531191922-2108
	W0531 19:50:54.793441    9164 cli_runner.go:211] docker container inspect -f {{.Id}} auto-20220531191922-2108 returned with exit code 1
	I0531 19:50:54.793672    9164 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} auto-20220531191922-2108: (1.1638383s)
	I0531 19:50:54.808001    9164 cli_runner.go:164] Run: docker network inspect auto-20220531191922-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 19:50:51.194840    9204 cli_runner.go:217] Completed: docker exec calico-20220531191937-2108 stat /var/lib/dpkg/alternatives/iptables: (1.3991943s)
	I0531 19:50:51.194840    9204 oci.go:247] the created container "calico-20220531191937-2108" has a running status.
	I0531 19:50:51.194840    9204 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220531191937-2108\id_rsa...
	I0531 19:50:51.352748    9204 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220531191937-2108\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0531 19:50:52.714374    9204 cli_runner.go:164] Run: docker container inspect calico-20220531191937-2108 --format={{.State.Status}}
	I0531 19:50:53.983691    9204 cli_runner.go:217] Completed: docker container inspect calico-20220531191937-2108 --format={{.State.Status}}: (1.2693109s)
	I0531 19:50:53.999679    9204 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0531 19:50:53.999679    9204 kic_runner.go:114] Args: [docker exec --privileged calico-20220531191937-2108 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0531 19:50:55.344133    9204 kic_runner.go:123] Done: [docker exec --privileged calico-20220531191937-2108 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.3442906s)
	I0531 19:50:55.347971    9204 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220531191937-2108\id_rsa...
	I0531 19:50:55.925795    9204 cli_runner.go:164] Run: docker container inspect calico-20220531191937-2108 --format={{.State.Status}}
	W0531 19:50:56.033847    9164 cli_runner.go:211] docker network inspect auto-20220531191922-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 19:50:56.033847    9164 cli_runner.go:217] Completed: docker network inspect auto-20220531191922-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2258406s)
	I0531 19:50:56.040853    9164 network_create.go:272] running [docker network inspect auto-20220531191922-2108] to gather additional debugging logs...
	I0531 19:50:56.040853    9164 cli_runner.go:164] Run: docker network inspect auto-20220531191922-2108
	W0531 19:50:57.206246    9164 cli_runner.go:211] docker network inspect auto-20220531191922-2108 returned with exit code 1
	I0531 19:50:57.206246    9164 cli_runner.go:217] Completed: docker network inspect auto-20220531191922-2108: (1.1653883s)
	I0531 19:50:57.206246    9164 network_create.go:275] error running [docker network inspect auto-20220531191922-2108]: docker network inspect auto-20220531191922-2108: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220531191922-2108
	I0531 19:50:57.206246    9164 network_create.go:277] output of [docker network inspect auto-20220531191922-2108]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220531191922-2108
	
	** /stderr **
	W0531 19:50:57.207252    9164 delete.go:139] delete failed (probably ok) <nil>
	I0531 19:50:57.207252    9164 fix.go:115] Sleeping 1 second for extra luck!
	I0531 19:50:58.209593    9164 start.go:131] createHost starting for "" (driver="docker")
	I0531 19:50:54.590673    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:56.597659    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:59.090202    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:50:58.216019    9164 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0531 19:50:58.216363    9164 start.go:165] libmachine.API.Create for "auto-20220531191922-2108" (driver="docker")
	I0531 19:50:58.216424    9164 client.go:168] LocalClient.Create starting
	I0531 19:50:58.216424    9164 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0531 19:50:58.217206    9164 main.go:134] libmachine: Decoding PEM data...
	I0531 19:50:58.217206    9164 main.go:134] libmachine: Parsing certificate...
	I0531 19:50:58.217598    9164 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0531 19:50:58.217933    9164 main.go:134] libmachine: Decoding PEM data...
	I0531 19:50:58.217933    9164 main.go:134] libmachine: Parsing certificate...
	I0531 19:50:58.233355    9164 cli_runner.go:164] Run: docker network inspect auto-20220531191922-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 19:50:59.565184    9164 cli_runner.go:211] docker network inspect auto-20220531191922-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 19:50:59.565184    9164 cli_runner.go:217] Completed: docker network inspect auto-20220531191922-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.3309144s)
	I0531 19:50:59.574188    9164 network_create.go:272] running [docker network inspect auto-20220531191922-2108] to gather additional debugging logs...
	I0531 19:50:59.574188    9164 cli_runner.go:164] Run: docker network inspect auto-20220531191922-2108
	I0531 19:50:57.128250    9204 cli_runner.go:217] Completed: docker container inspect calico-20220531191937-2108 --format={{.State.Status}}: (1.2024495s)
	I0531 19:50:57.128250    9204 machine.go:88] provisioning docker machine ...
	I0531 19:50:57.128250    9204 ubuntu.go:169] provisioning hostname "calico-20220531191937-2108"
	I0531 19:50:57.136258    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:50:58.384809    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.2485456s)
	I0531 19:50:58.389815    9204 main.go:134] libmachine: Using SSH client type: native
	I0531 19:50:58.396816    9204 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54919 <nil> <nil>}
	I0531 19:50:58.396816    9204 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-20220531191937-2108 && echo "calico-20220531191937-2108" | sudo tee /etc/hostname
	I0531 19:50:58.637577    9204 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-20220531191937-2108
	
	I0531 19:50:58.647947    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:50:59.961886    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.3139332s)
	I0531 19:50:59.966482    9204 main.go:134] libmachine: Using SSH client type: native
	I0531 19:50:59.967181    9204 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54919 <nil> <nil>}
	I0531 19:50:59.967181    9204 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220531191937-2108' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220531191937-2108/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220531191937-2108' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 19:51:00.166174    9204 main.go:134] libmachine: SSH cmd err, output: <nil>: 
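(Aside: the SSH command logged above is an idempotent /etc/hosts update — add a `127.0.1.1 <hostname>` mapping only if the hostname is absent, rewriting a stale `127.0.1.1` line if one exists. A minimal local sketch of the same logic, run against a scratch file rather than the real /etc/hosts; the function name and file are illustrative, not from minikube, and `sed -i` here assumes GNU sed:)

```shell
#!/bin/sh
# Sketch of the hostname-pinning logic from the SSH command above:
# ensure a "127.0.1.1 <name>" entry exists, replacing a stale one.
ensure_host_entry() {
  hosts_file="$1"; name="$2"
  if ! grep -q "[[:space:]]$name\$" "$hosts_file"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts_file"; then
      # a 127.0.1.1 line exists with the wrong name: rewrite it in place
      sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts_file"
    else
      # no 127.0.1.1 line yet: append one
      echo "127.0.1.1 $name" >> "$hosts_file"
    fi
  fi
}

hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"
ensure_host_entry "$hosts" calico-20220531191937-2108
grep '^127\.0\.1\.1' "$hosts"
```

Because the outer `grep` guard matches the final state, running the function a second time is a no-op, which is why minikube can safely re-issue this command on every provision.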
	I0531 19:51:00.166174    9204 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube7\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube7\minikube-integration\.minikube}
	I0531 19:51:00.166267    9204 ubuntu.go:177] setting up certificates
	I0531 19:51:00.166267    9204 provision.go:83] configureAuth start
	I0531 19:51:00.174598    9204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220531191937-2108
	I0531 19:51:01.585861    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:03.594656    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	W0531 19:51:00.789363    9164 cli_runner.go:211] docker network inspect auto-20220531191922-2108 returned with exit code 1
	I0531 19:51:00.789363    9164 cli_runner.go:217] Completed: docker network inspect auto-20220531191922-2108: (1.2150736s)
	I0531 19:51:00.789465    9164 network_create.go:275] error running [docker network inspect auto-20220531191922-2108]: docker network inspect auto-20220531191922-2108: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220531191922-2108
	I0531 19:51:00.789465    9164 network_create.go:277] output of [docker network inspect auto-20220531191922-2108]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220531191922-2108
	
	** /stderr **
	I0531 19:51:00.799089    9164 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 19:51:02.037678    9164 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2385829s)
	I0531 19:51:02.058285    9164 network.go:284] reusing subnet 192.168.49.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005f2b80] amended:false}} dirty:map[] misses:0}
	I0531 19:51:02.058285    9164 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 19:51:02.058285    9164 network_create.go:115] attempt to create docker network auto-20220531191922-2108 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0531 19:51:02.069439    9164 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220531191922-2108
	W0531 19:51:03.250157    9164 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220531191922-2108 returned with exit code 1
	I0531 19:51:03.250157    9164 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220531191922-2108: (1.1806423s)
	W0531 19:51:03.250157    9164 network_create.go:107] failed to create docker network auto-20220531191922-2108 192.168.49.0/24, will retry: subnet is taken
	I0531 19:51:03.275153    9164 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005f2b80] amended:false}} dirty:map[] misses:0}
	I0531 19:51:03.275153    9164 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 19:51:03.299141    9164 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005f2b80] amended:true}} dirty:map[192.168.49.0:0xc0005f2b80 192.168.58.0:0xc000006c98] misses:0}
	I0531 19:51:03.299141    9164 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 19:51:03.299141    9164 network_create.go:115] attempt to create docker network auto-20220531191922-2108 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0531 19:51:03.307143    9164 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220531191922-2108
	W0531 19:51:04.516619    9164 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220531191922-2108 returned with exit code 1
	I0531 19:51:04.516619    9164 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220531191922-2108: (1.2094701s)
	W0531 19:51:04.516619    9164 network_create.go:107] failed to create docker network auto-20220531191922-2108 192.168.58.0/24, will retry: subnet is taken
	I0531 19:51:04.541855    9164 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005f2b80] amended:true}} dirty:map[192.168.49.0:0xc0005f2b80 192.168.58.0:0xc000006c98] misses:1}
	I0531 19:51:04.541855    9164 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 19:51:04.560170    9164 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005f2b80] amended:true}} dirty:map[192.168.49.0:0xc0005f2b80 192.168.58.0:0xc000006c98 192.168.67.0:0xc0005922f8] misses:1}
	I0531 19:51:04.560170    9164 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 19:51:04.560170    9164 network_create.go:115] attempt to create docker network auto-20220531191922-2108 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0531 19:51:04.568815    9164 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220531191922-2108
	I0531 19:51:01.477955    9204 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220531191937-2108: (1.303351s)
	I0531 19:51:01.477955    9204 provision.go:138] copyHostCerts
	I0531 19:51:01.477955    9204 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem, removing ...
	I0531 19:51:01.477955    9204 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\key.pem
	I0531 19:51:01.478916    9204 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/key.pem (1679 bytes)
	I0531 19:51:01.479924    9204 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem, removing ...
	I0531 19:51:01.479924    9204 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.pem
	I0531 19:51:01.479924    9204 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0531 19:51:01.481945    9204 exec_runner.go:144] found C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem, removing ...
	I0531 19:51:01.481945    9204 exec_runner.go:207] rm: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cert.pem
	I0531 19:51:01.481945    9204 exec_runner.go:151] cp: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube7\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0531 19:51:01.482907    9204 provision.go:112] generating server cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.calico-20220531191937-2108 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220531191937-2108]
	I0531 19:51:01.638392    9204 provision.go:172] copyRemoteCerts
	I0531 19:51:01.648401    9204 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 19:51:01.656385    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:51:02.904839    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.2483134s)
	I0531 19:51:02.904949    9204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54919 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220531191937-2108\id_rsa Username:docker}
	I0531 19:51:03.060111    9204 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.4117041s)
	I0531 19:51:03.060887    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 19:51:03.138138    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0531 19:51:03.189140    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0531 19:51:03.247164    9204 provision.go:86] duration metric: configureAuth took 3.0808835s
	I0531 19:51:03.247164    9204 ubuntu.go:193] setting minikube options for container-runtime
	I0531 19:51:03.247164    9204 config.go:178] Loaded profile config "calico-20220531191937-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:51:03.261180    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:51:04.500656    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.239471s)
	I0531 19:51:04.504628    9204 main.go:134] libmachine: Using SSH client type: native
	I0531 19:51:04.504628    9204 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54919 <nil> <nil>}
	I0531 19:51:04.504628    9204 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0531 19:51:04.713593    9204 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0531 19:51:04.713593    9204 ubuntu.go:71] root file system type: overlay
	I0531 19:51:04.714591    9204 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0531 19:51:04.721589    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:51:05.902536    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.1809424s)
	I0531 19:51:05.906537    9204 main.go:134] libmachine: Using SSH client type: native
	I0531 19:51:05.907557    9204 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54919 <nil> <nil>}
	I0531 19:51:05.907557    9204 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0531 19:51:06.138734    9204 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0531 19:51:06.148744    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:51:06.073904    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:08.101133    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	W0531 19:51:05.734225    9164 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220531191922-2108 returned with exit code 1
	I0531 19:51:05.734225    9164 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220531191922-2108: (1.1654052s)
	W0531 19:51:05.734225    9164 network_create.go:107] failed to create docker network auto-20220531191922-2108 192.168.67.0/24, will retry: subnet is taken
	I0531 19:51:05.751381    9164 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005f2b80] amended:true}} dirty:map[192.168.49.0:0xc0005f2b80 192.168.58.0:0xc000006c98 192.168.67.0:0xc0005922f8] misses:2}
	I0531 19:51:05.751381    9164 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 19:51:05.770483    9164 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005f2b80] amended:true}} dirty:map[192.168.49.0:0xc0005f2b80 192.168.58.0:0xc000006c98 192.168.67.0:0xc0005922f8 192.168.76.0:0xc000006dc0] misses:2}
	I0531 19:51:05.770483    9164 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 19:51:05.770483    9164 network_create.go:115] attempt to create docker network auto-20220531191922-2108 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0531 19:51:05.777060    9164 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220531191922-2108
	I0531 19:51:07.131208    9164 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220531191922-2108: (1.3541418s)
	I0531 19:51:07.131208    9164 network_create.go:99] docker network auto-20220531191922-2108 192.168.76.0/24 created
	I0531 19:51:07.131208    9164 kic.go:106] calculated static IP "192.168.76.2" for the "auto-20220531191922-2108" container
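(Aside: the "subnet is taken, will retry" sequence above walks candidate /24 subnets by bumping the third octet — 192.168.49.0 → 58.0 → 67.0 → 76.0 — until `docker network create` succeeds. A sketch of that walk; the +9 step is inferred from the attempts in this log, and `next_subnet` is an illustrative name, not a minikube function:)

```shell
#!/bin/sh
# Sketch of the subnet retry walk seen above: each "subnet is taken"
# failure advances the third octet of the 192.168.x.0/24 candidate by 9,
# matching the 49 -> 58 -> 67 -> 76 sequence in this log.
next_subnet() {
  third=${1#192.168.}     # strip the "192.168." prefix
  third=${third%.0}       # strip the trailing ".0"
  echo "192.168.$((third + 9)).0"
}

s=192.168.49.0
for attempt in 1 2 3; do
  s=$(next_subnet "$s")
  echo "retry $attempt: $s/24"   # candidate passed to docker network create
done
```

In the real flow each candidate is also checked against minikube's in-process reservation map (the `reserving subnet ... for 1m0s` lines) before the `docker network create` attempt.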
	I0531 19:51:07.147458    9164 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 19:51:08.442899    9164 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.2952755s)
	I0531 19:51:08.455737    9164 cli_runner.go:164] Run: docker volume create auto-20220531191922-2108 --label name.minikube.sigs.k8s.io=auto-20220531191922-2108 --label created_by.minikube.sigs.k8s.io=true
	I0531 19:51:09.639133    9164 cli_runner.go:217] Completed: docker volume create auto-20220531191922-2108 --label name.minikube.sigs.k8s.io=auto-20220531191922-2108 --label created_by.minikube.sigs.k8s.io=true: (1.1833905s)
	I0531 19:51:09.639133    9164 oci.go:103] Successfully created a docker volume auto-20220531191922-2108
	I0531 19:51:09.649152    9164 cli_runner.go:164] Run: docker run --rm --name auto-20220531191922-2108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220531191922-2108 --entrypoint /usr/bin/test -v auto-20220531191922-2108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib
	I0531 19:51:07.427387    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.2776444s)
	I0531 19:51:07.432952    9204 main.go:134] libmachine: Using SSH client type: native
	I0531 19:51:07.433950    9204 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1322ea0] 0x1325d00 <nil>  [] 0s} 127.0.0.1 54919 <nil> <nil>}
	I0531 19:51:07.433950    9204 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0531 19:51:09.087698    9204 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-31 19:51:06.120425000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0531 19:51:09.087698    9204 machine.go:91] provisioned docker machine in 11.9593959s
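(Aside: the `sudo diff -u ... || { mv ...; systemctl restart docker; }` one-liner above is a compare-then-swap idiom — the unit file is replaced and the daemon restarted only when the rendered content actually differs, which is why the diff output appears in the log on first provision. A local sketch of the same pattern against scratch files, with the restart reduced to an echo; paths and the function name are illustrative:)

```shell
#!/bin/sh
# Sketch of the update-if-changed idiom from the SSH command above:
# diff exits 0 when the files match, so the replace branch (and the
# daemon restart it implies) only runs on a real content change.
update_if_changed() {
  current="$1"; candidate="$2"
  if diff -u "$current" "$candidate" > /dev/null; then
    rm -f "$candidate"           # identical: discard the candidate
    echo unchanged
  else
    mv "$candidate" "$current"   # differs: swap in; real code restarts here
    echo restarted
  fi
}

cur=$(mktemp); new=$(mktemp)
echo "ExecStart=/usr/bin/dockerd -H fd://" > "$cur"
echo "ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376" > "$new"
update_if_changed "$cur" "$new"
```

On a re-provision of an already-configured machine the two files match, the guard short-circuits, and docker is left running undisturbed.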
	I0531 19:51:09.087698    9204 client.go:171] LocalClient.Create took 1m7.5188147s
	I0531 19:51:09.087698    9204 start.go:173] duration metric: libmachine.API.Create for "calico-20220531191937-2108" took 1m7.5188147s
	I0531 19:51:09.087698    9204 start.go:306] post-start starting for "calico-20220531191937-2108" (driver="docker")
	I0531 19:51:09.087698    9204 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 19:51:09.098732    9204 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 19:51:09.106693    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:51:10.371344    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.264614s)
	I0531 19:51:10.372478    9204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54919 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220531191937-2108\id_rsa Username:docker}
	I0531 19:51:10.511036    9204 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.4122985s)
	I0531 19:51:10.522025    9204 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 19:51:10.533022    9204 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 19:51:10.533022    9204 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 19:51:10.533022    9204 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 19:51:10.533022    9204 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0531 19:51:10.533022    9204 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\addons for local assets ...
	I0531 19:51:10.533022    9204 filesync.go:126] Scanning C:\Users\jenkins.minikube7\minikube-integration\.minikube\files for local assets ...
	I0531 19:51:10.534037    9204 filesync.go:149] local asset: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\21082.pem -> 21082.pem in /etc/ssl/certs
	I0531 19:51:10.551037    9204 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 19:51:10.572032    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\21082.pem --> /etc/ssl/certs/21082.pem (1708 bytes)
	I0531 19:51:10.635039    9204 start.go:309] post-start completed in 1.5473339s
	I0531 19:51:10.649038    9204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220531191937-2108
	I0531 19:51:10.586032    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:12.595767    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:12.494224    9164 cli_runner.go:217] Completed: docker run --rm --name auto-20220531191922-2108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220531191922-2108 --entrypoint /usr/bin/test -v auto-20220531191922-2108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib: (2.8450593s)
	I0531 19:51:12.494489    9164 oci.go:107] Successfully prepared a docker volume auto-20220531191922-2108
	I0531 19:51:12.494489    9164 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 19:51:12.494638    9164 kic.go:179] Starting extracting preloaded images to volume ...
	I0531 19:51:12.506015    9164 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20220531191922-2108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 19:51:11.927436    9204 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220531191937-2108: (1.2782851s)
	I0531 19:51:11.927739    9204 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\config.json ...
	I0531 19:51:11.946471    9204 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:51:11.954471    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:51:13.203722    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.2492449s)
	I0531 19:51:13.203722    9204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54919 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220531191937-2108\id_rsa Username:docker}
	I0531 19:51:13.292052    9204 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.3445869s)
	I0531 19:51:13.301045    9204 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
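	[editor's note: the two `df` probes above read the used-percentage and the free gigabytes of `/var` with a one-line awk filter. A minimal local sketch of the same pattern, run against `/` so it works on any Linux host:]

```shell
# NR==2 skips df's header row; with GNU df, $5 is the Use% column and $4 the Avail column.
used=$(df -h / | awk 'NR==2{print $5}')
avail=$(df -BG / | awk 'NR==2{print $4}')
echo "used=$used avail=$avail"
```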
	I0531 19:51:13.313047    9204 start.go:134] duration metric: createHost completed in 1m11.7481896s
	I0531 19:51:13.313047    9204 start.go:81] releasing machines lock for "calico-20220531191937-2108", held for 1m11.7481896s
	I0531 19:51:13.320047    9204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220531191937-2108
	I0531 19:51:14.526134    9204 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220531191937-2108: (1.2058332s)
	I0531 19:51:14.530551    9204 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0531 19:51:14.540587    9204 ssh_runner.go:195] Run: systemctl --version
	I0531 19:51:14.544618    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:51:14.554196    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:51:15.796816    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.2521124s)
	I0531 19:51:15.797621    9204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54919 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220531191937-2108\id_rsa Username:docker}
	I0531 19:51:15.820048    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.2658464s)
	I0531 19:51:15.820048    9204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54919 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\calico-20220531191937-2108\id_rsa Username:docker}
	I0531 19:51:16.029994    9204 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.4992808s)
	I0531 19:51:16.029994    9204 ssh_runner.go:235] Completed: systemctl --version: (1.4894001s)
	I0531 19:51:16.047688    9204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 19:51:16.109228    9204 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 19:51:16.150091    9204 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0531 19:51:16.165654    9204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0531 19:51:16.192715    9204 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
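	[editor's note: the `%!s(MISSING)` above is Go's marker for a format verb that received no argument; the underlying command simply writes a two-line crictl endpoint config through `sudo tee`. A sketch of the file it produces, written to a temp path instead of `/etc/crictl.yaml`:]

```shell
# Write the crictl endpoint config the mangled printf above is building.
conf=$(mktemp)
printf 'runtime-endpoint: unix:///var/run/dockershim.sock\nimage-endpoint: unix:///var/run/dockershim.sock\n' > "$conf"
cat "$conf"
```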
	I0531 19:51:16.251284    9204 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0531 19:51:16.429590    9204 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0531 19:51:16.728499    9204 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0531 19:51:16.779238    9204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:51:16.986896    9204 ssh_runner.go:195] Run: sudo systemctl start docker
	I0531 19:51:17.030256    9204 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 19:51:17.159713    9204 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0531 19:51:14.632262    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:17.091454    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:17.494325    9204 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0531 19:51:17.508083    9204 cli_runner.go:164] Run: docker exec -t calico-20220531191937-2108 dig +short host.docker.internal
	I0531 19:51:18.989400    9204 cli_runner.go:217] Completed: docker exec -t calico-20220531191937-2108 dig +short host.docker.internal: (1.4813113s)
	I0531 19:51:18.989400    9204 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0531 19:51:19.001193    9204 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0531 19:51:19.019131    9204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
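	[editor's note: the one-liner above is minikube's idempotent /etc/hosts refresh: drop any stale line ending in the hostname, append the fresh mapping, then copy the temp file back over /etc/hosts with sudo. The same pattern against a throwaway file:]

```shell
# Idempotent host-entry refresh: remove any old mapping, append the new one.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.1\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"; printf '192.168.65.2\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"   # exactly one host.minikube.internal line remains, pointing at the new IP
```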
	I0531 19:51:19.065499    9204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220531191937-2108
	I0531 19:51:20.327768    9204 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220531191937-2108: (1.261278s)
	I0531 19:51:20.327768    9204 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 19:51:20.337236    9204 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 19:51:20.420821    9204 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0531 19:51:20.420977    9204 docker.go:541] Images already preloaded, skipping extraction
	I0531 19:51:20.435383    9204 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0531 19:51:20.533516    9204 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0531 19:51:20.533582    9204 cache_images.go:84] Images are preloaded, skipping loading
	I0531 19:51:20.544179    9204 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0531 19:51:20.763164    9204 cni.go:95] Creating CNI manager for "calico"
	I0531 19:51:20.763266    9204 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 19:51:20.763266    9204 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220531191937-2108 NodeName:calico-20220531191937-2108 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0531 19:51:20.763605    9204 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "calico-20220531191937-2108"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 19:51:20.763779    9204 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20220531191937-2108 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:calico-20220531191937-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0531 19:51:20.777560    9204 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0531 19:51:20.812377    9204 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 19:51:20.822173    9204 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 19:51:20.847759    9204 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0531 19:51:20.894419    9204 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 19:51:20.943002    9204 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I0531 19:51:21.004999    9204 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0531 19:51:21.027970    9204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 19:51:21.065422    9204 certs.go:54] Setting up C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108 for IP: 192.168.49.2
	I0531 19:51:21.066491    9204 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key
	I0531 19:51:21.066798    9204 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key
	I0531 19:51:21.068010    9204 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\client.key
	I0531 19:51:21.068489    9204 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\client.crt with IP's: []
	I0531 19:51:19.585051    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:21.597865    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:24.105041    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:21.239497    9204 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\client.crt ...
	I0531 19:51:21.239497    9204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\client.crt: {Name:mk7717fa2d448864e461cc54e83296f68b8463bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:51:21.240569    9204 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\client.key ...
	I0531 19:51:21.240569    9204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\client.key: {Name:mkbd89bf22718c6399768f822a13f7683a912fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:51:21.241576    9204 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\apiserver.key.dd3b5fb2
	I0531 19:51:21.241576    9204 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0531 19:51:21.303988    9204 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\apiserver.crt.dd3b5fb2 ...
	I0531 19:51:21.303988    9204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\apiserver.crt.dd3b5fb2: {Name:mk1c798eabfdccece8c43513d5079e690fc5c5f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:51:21.304577    9204 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\apiserver.key.dd3b5fb2 ...
	I0531 19:51:21.304577    9204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\apiserver.key.dd3b5fb2: {Name:mk508c26789c2c5b39d18c925674707c3be71d06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:51:21.305706    9204 certs.go:320] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\apiserver.crt.dd3b5fb2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\apiserver.crt
	I0531 19:51:21.312663    9204 certs.go:324] copying C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\apiserver.key.dd3b5fb2 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\apiserver.key
	I0531 19:51:21.313459    9204 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\proxy-client.key
	I0531 19:51:21.314524    9204 crypto.go:68] Generating cert C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\proxy-client.crt with IP's: []
	I0531 19:51:21.471497    9204 crypto.go:156] Writing cert to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\proxy-client.crt ...
	I0531 19:51:21.471497    9204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\proxy-client.crt: {Name:mk632ff53178cf3468ba9f6e8992cf6c07b84866 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:51:21.473256    9204 crypto.go:164] Writing key to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\proxy-client.key ...
	I0531 19:51:21.473256    9204 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\proxy-client.key: {Name:mkb4191699fbbb28010ed2ba28eed8f9214b0550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:51:21.481348    9204 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\2108.pem (1338 bytes)
	W0531 19:51:21.482170    9204 certs.go:384] ignoring C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\2108_empty.pem, impossibly tiny 0 bytes
	I0531 19:51:21.482170    9204 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0531 19:51:21.482611    9204 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0531 19:51:21.482995    9204 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0531 19:51:21.483267    9204 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0531 19:51:21.483525    9204 certs.go:388] found cert: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\21082.pem (1708 bytes)
	I0531 19:51:21.484551    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 19:51:21.565598    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 19:51:21.652047    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 19:51:21.708339    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\calico-20220531191937-2108\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 19:51:21.766482    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 19:51:21.824736    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 19:51:21.895741    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 19:51:21.964128    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0531 19:51:22.031639    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 19:51:22.113459    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\2108.pem --> /usr/share/ca-certificates/2108.pem (1338 bytes)
	I0531 19:51:22.170954    9204 ssh_runner.go:362] scp C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\ssl\certs\21082.pem --> /usr/share/ca-certificates/21082.pem (1708 bytes)
	I0531 19:51:22.227855    9204 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0531 19:51:22.293264    9204 ssh_runner.go:195] Run: openssl version
	I0531 19:51:22.325002    9204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2108.pem && ln -fs /usr/share/ca-certificates/2108.pem /etc/ssl/certs/2108.pem"
	I0531 19:51:22.376521    9204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2108.pem
	I0531 19:51:22.393458    9204 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 31 17:31 /usr/share/ca-certificates/2108.pem
	I0531 19:51:22.403431    9204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2108.pem
	I0531 19:51:22.425445    9204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2108.pem /etc/ssl/certs/51391683.0"
	I0531 19:51:22.462364    9204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21082.pem && ln -fs /usr/share/ca-certificates/21082.pem /etc/ssl/certs/21082.pem"
	I0531 19:51:22.517805    9204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21082.pem
	I0531 19:51:22.527793    9204 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 31 17:31 /usr/share/ca-certificates/21082.pem
	I0531 19:51:22.538806    9204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21082.pem
	I0531 19:51:22.561793    9204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21082.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 19:51:22.598947    9204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 19:51:22.642030    9204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:51:22.653776    9204 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 31 17:19 /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:51:22.668000    9204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:51:22.713832    9204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
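	[editor's note: the `openssl x509 -hash` / `ln -fs` pairs above build the `<subject-hash>.0` symlinks that OpenSSL uses to locate trust anchors in a CApath directory. A self-contained sketch with a throwaway CA in a temp dir (not /etc/ssl/certs); `sketchCA` is a made-up subject name:]

```shell
# Generate a throwaway self-signed CA, then link it under its subject hash.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=sketchCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"    # OpenSSL resolves CAs as <hash>.0, <hash>.1, ...
openssl verify -CApath "$dir" "$dir/ca.pem"   # should report "... ca.pem: OK"
```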
	I0531 19:51:22.737619    9204 kubeadm.go:395] StartCluster: {Name:calico-20220531191937-2108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:calico-20220531191937-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 19:51:22.750642    9204 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0531 19:51:22.840394    9204 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 19:51:22.881382    9204 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 19:51:22.906373    9204 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0531 19:51:22.918377    9204 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 19:51:22.946873    9204 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 19:51:22.946986    9204 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 19:51:26.579529    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	I0531 19:51:28.595723    9268 pod_ready.go:102] pod "cilium-k5t52" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-05-31 19:43:05 UTC, end at Tue 2022-05-31 19:51:45 UTC. --
	May 31 19:48:50 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:48:50.186351000Z" level=info msg="ignoring event" container=657a49562f497935ed8cfb5450b794fcef0fb529bae8516198153c43d1a06794 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:48:50 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:48:50.671769800Z" level=info msg="ignoring event" container=52eebb6efca24e2f29a4c15d5ebc725494e06fb2fdec51b3c69f6b4bbd56e893 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:48:51 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:48:51.145577200Z" level=info msg="ignoring event" container=321b341c3da01f03172c5f814e5e3956dff334962fe69b1419d552a7088ec25e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:48:51 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:48:51.560593600Z" level=info msg="ignoring event" container=42131017a14c7aa8c301405260289deb36491abb326dcf4664a7643128255cb4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:48:52 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:48:52.020321600Z" level=info msg="ignoring event" container=e31760c5c1f5fffc253c944bc1585f47574225e13c0f6895e332f25ea8794a05 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:48:52 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:48:52.472713000Z" level=info msg="ignoring event" container=b0ce8ecd8c5e3109c0394845dbb8dbded24845410f32ebeff68a660438a5fc58 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:49:48 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:49:48.947001500Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:49:48 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:49:48.948075900Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:49:49 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:49:49.132672800Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:49:50 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:49:50.943284000Z" level=info msg="ignoring event" container=7e4df02eea3abb4773abbd8693ff77f41cabb5856ea82c9ed22215112b159445 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:49:51 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:49:51.296521800Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	May 31 19:49:51 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:49:51.469553400Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	May 31 19:49:51 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:49:51.487563700Z" level=info msg="ignoring event" container=6c33a6cd399b34f3186ba32a1a024490c70b9d36496e04dd6596dc6b8e88a6ea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:50:10 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:50:10.100102000Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	May 31 19:50:10 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:50:10.851162100Z" level=info msg="ignoring event" container=3612c77016ee135003d1e5f7eb30b67ef73402742c72878b54cc53a10fc1265c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:50:13 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:50:13.633607300Z" level=info msg="ignoring event" container=fb4f7abe8f01a9515de8c3f9a0f0f5847dc7e16251463a2d2f3b43bb590f1ae5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:50:33 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:50:33.190591600Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:50:33 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:50:33.190764700Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:50:33 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:50:33.391311500Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:50:33 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:50:33.444839500Z" level=info msg="ignoring event" container=4e329316b6cf16e812f4fc5db1fe998ca74bc770f881aa43a99f5fdbe785d9b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:51:01 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:51:01.252849500Z" level=info msg="ignoring event" container=b737170ecd04f67be756f5b2dcd03440df6107b41a300d43cc2fa92e16bc4acb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 31 19:51:03 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:51:03.446542400Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:51:03 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:51:03.446731300Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:51:03 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:51:03.455770700Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 31 19:51:44 embed-certs-20220531193346-2108 dockerd[249]: time="2022-05-31T19:51:44.852957200Z" level=info msg="ignoring event" container=ccea2f24f20cd25aded40f2b1fa4e8a3771c225c2c4374a9caabe01f4169c2c5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	ccea2f24f20cd       a90209bb39e3d                                                                                    1 second ago         Exited              dashboard-metrics-scraper   4                   573ade20600ce
	b737170ecd04f       a90209bb39e3d                                                                                    45 seconds ago       Exited              dashboard-metrics-scraper   3                   573ade20600ce
	d951ec257dd10       kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2   About a minute ago   Running             kubernetes-dashboard        0                   2bafb3ffe18c9
	9a7ffad2971b8       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   da0d57e2765f6
	e92d705833d42       a4ca41631cc7a                                                                                    2 minutes ago        Running             coredns                     0                   93fb15ca13d74
	50d3e27751736       4c03754524064                                                                                    2 minutes ago        Running             kube-proxy                  0                   6a8f003c94a78
	482a2f211adf5       595f327f224a4                                                                                    2 minutes ago        Running             kube-scheduler              2                   0bbf18fbf4fa4
	0b2164659b745       25f8c7f3da61c                                                                                    2 minutes ago        Running             etcd                        2                   efe187e67f2e7
	e45e0056f239d       8fa62c12256df                                                                                    2 minutes ago        Running             kube-apiserver              2                   89c6f37c504cf
	0607075503f48       df7b72818ad2e                                                                                    2 minutes ago        Running             kube-controller-manager     2                   97f35fc644fe5
	
	* 
	* ==> coredns [e92d705833d4] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220531193346-2108
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220531193346-2108
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=bd46569bd7cb517fcad2e704abdebd2826bd8454
	                    minikube.k8s.io/name=embed-certs-20220531193346-2108
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_31T19_49_17_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 31 May 2022 19:49:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220531193346-2108
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 31 May 2022 19:51:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 31 May 2022 19:50:51 +0000   Tue, 31 May 2022 19:49:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 31 May 2022 19:50:51 +0000   Tue, 31 May 2022 19:49:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 31 May 2022 19:50:51 +0000   Tue, 31 May 2022 19:49:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 31 May 2022 19:50:51 +0000   Tue, 31 May 2022 19:49:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    embed-certs-20220531193346-2108
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfc82849fe6e4a6a9236307a23a8b5f1
	  System UUID:                bfc82849fe6e4a6a9236307a23a8b5f1
	  Boot ID:                    99d8680c-6839-4c5e-a5fa-8740ef80d5ef
	  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-5m9xf                                    100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m17s
	  kube-system                 etcd-embed-certs-20220531193346-2108                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         2m28s
	  kube-system                 kube-apiserver-embed-certs-20220531193346-2108             250m (1%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-controller-manager-embed-certs-20220531193346-2108    200m (1%)     0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-proxy-jqpk2                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-scheduler-embed-certs-20220531193346-2108             100m (0%)     0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 metrics-server-b955d9d8-w48dh                              100m (0%)     0 (0%)      200Mi (0%)       0 (0%)         2m3s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kubernetes-dashboard        dashboard-metrics-scraper-56974995fc-g6xnz                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kubernetes-dashboard        kubernetes-dashboard-8469778f77-9rhsq                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (5%)   0 (0%)
	  memory             370Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 2m1s                   kube-proxy  
	  Normal  NodeHasSufficientMemory  2m47s (x6 over 2m48s)  kubelet     Node embed-certs-20220531193346-2108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m47s (x5 over 2m48s)  kubelet     Node embed-certs-20220531193346-2108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m47s (x5 over 2m48s)  kubelet     Node embed-certs-20220531193346-2108 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m30s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m29s                  kubelet     Node embed-certs-20220531193346-2108 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m29s                  kubelet     Node embed-certs-20220531193346-2108 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m29s                  kubelet     Node embed-certs-20220531193346-2108 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m29s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m19s                  kubelet     Node embed-certs-20220531193346-2108 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000006] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.089750] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.002712] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.106424] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.091580] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000006] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[May31 19:22] WSL2: Performing memory compaction.
	[May31 19:23] WSL2: Performing memory compaction.
	[May31 19:24] WSL2: Performing memory compaction.
	[May31 19:25] WSL2: Performing memory compaction.
	[May31 19:26] WSL2: Performing memory compaction.
	[May31 19:27] WSL2: Performing memory compaction.
	[May31 19:28] WSL2: Performing memory compaction.
	[May31 19:30] WSL2: Performing memory compaction.
	[May31 19:32] WSL2: Performing memory compaction.
	[May31 19:34] WSL2: Performing memory compaction.
	[May31 19:37] WSL2: Performing memory compaction.
	[May31 19:39] WSL2: Performing memory compaction.
	[May31 19:40] WSL2: Performing memory compaction.
	[May31 19:45] WSL2: Performing memory compaction.
	[May31 19:46] WSL2: Performing memory compaction.
	[May31 19:48] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [0b2164659b74] <==
	* {"level":"warn","ts":"2022-05-31T19:51:37.855Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-31T19:51:36.475Z","time spent":"1.379922s","remote":"127.0.0.1:46008","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2022-05-31T19:51:37.855Z","caller":"traceutil/trace.go:171","msg":"trace[1339695567] transaction","detail":"{read_only:false; response_revision:745; number_of_response:1; }","duration":"1.3795013s","start":"2022-05-31T19:51:36.476Z","end":"2022-05-31T19:51:37.855Z","steps":["trace[1339695567] 'process raft request'  (duration: 1.3790609s)"],"step_count":1}
	{"level":"warn","ts":"2022-05-31T19:51:37.856Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-31T19:51:36.476Z","time spent":"1.379601s","remote":"127.0.0.1:46074","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:740 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1045 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2022-05-31T19:51:37.856Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.2580065s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-05-31T19:51:37.856Z","caller":"traceutil/trace.go:171","msg":"trace[682681465] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:745; }","duration":"1.2580775s","start":"2022-05-31T19:51:36.598Z","end":"2022-05-31T19:51:37.856Z","steps":["trace[682681465] 'agreement among raft nodes before linearized reading'  (duration: 1.257857s)"],"step_count":1}
	{"level":"warn","ts":"2022-05-31T19:51:37.856Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-31T19:51:36.598Z","time spent":"1.2581519s","remote":"127.0.0.1:46090","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2022-05-31T19:51:38.357Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":2289940891359724779,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-05-31T19:51:39.044Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.9999121s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2022-05-31T19:51:39.045Z","caller":"traceutil/trace.go:171","msg":"trace[2112298364] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"2.0001341s","start":"2022-05-31T19:51:37.044Z","end":"2022-05-31T19:51:39.044Z","steps":["trace[2112298364] 'agreement among raft nodes before linearized reading'  (duration: 1.9999074s)"],"step_count":1}
	{"level":"warn","ts":"2022-05-31T19:51:39.045Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-31T19:51:37.044Z","time spent":"2.0003388s","remote":"127.0.0.1:46090","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	WARNING: 2022/05/31 19:51:39 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2022-05-31T19:51:39.094Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"342.0188ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2289940891359724780 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.67.2\" mod_revision:734 > success:<request_put:<key:\"/registry/masterleases/192.168.67.2\" value_size:67 lease:2289940891359724776 >> failure:<request_range:<key:\"/registry/masterleases/192.168.67.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-05-31T19:51:39.094Z","caller":"traceutil/trace.go:171","msg":"trace[1770385449] linearizableReadLoop","detail":"{readStateIndex:787; appliedIndex:786; }","duration":"1.2388384s","start":"2022-05-31T19:51:37.855Z","end":"2022-05-31T19:51:39.094Z","steps":["trace[1770385449] 'read index received'  (duration: 896.1351ms)","trace[1770385449] 'applied index is now lower than readState.Index'  (duration: 342.6996ms)"],"step_count":2}
	{"level":"info","ts":"2022-05-31T19:51:39.094Z","caller":"traceutil/trace.go:171","msg":"trace[328777976] transaction","detail":"{read_only:false; response_revision:746; number_of_response:1; }","duration":"1.2365481s","start":"2022-05-31T19:51:37.858Z","end":"2022-05-31T19:51:39.094Z","steps":["trace[328777976] 'process raft request'  (duration: 894.1845ms)","trace[328777976] 'compare'  (duration: 341.5904ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T19:51:39.094Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-31T19:51:37.858Z","time spent":"1.2366169s","remote":"127.0.0.1:46008","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.67.2\" mod_revision:734 > success:<request_put:<key:\"/registry/masterleases/192.168.67.2\" value_size:67 lease:2289940891359724776 >> failure:<request_range:<key:\"/registry/masterleases/192.168.67.2\" > >"}
	{"level":"info","ts":"2022-05-31T19:51:39.240Z","caller":"traceutil/trace.go:171","msg":"trace[149399756] linearizableReadLoop","detail":"{readStateIndex:787; appliedIndex:787; }","duration":"145.9032ms","start":"2022-05-31T19:51:39.094Z","end":"2022-05-31T19:51:39.240Z","steps":["trace[149399756] 'read index received'  (duration: 145.8849ms)","trace[149399756] 'applied index is now lower than readState.Index'  (duration: 13.8µs)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T19:51:39.425Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"324.5239ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
	{"level":"warn","ts":"2022-05-31T19:51:39.425Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"368.3092ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-05-31T19:51:39.425Z","caller":"traceutil/trace.go:171","msg":"trace[1342692633] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:746; }","duration":"368.3536ms","start":"2022-05-31T19:51:39.057Z","end":"2022-05-31T19:51:39.425Z","steps":["trace[1342692633] 'agreement among raft nodes before linearized reading'  (duration: 183.5125ms)","trace[1342692633] 'range keys from in-memory index tree'  (duration: 184.7611ms)"],"step_count":2}
	{"level":"info","ts":"2022-05-31T19:51:39.425Z","caller":"traceutil/trace.go:171","msg":"trace[900947950] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:746; }","duration":"324.7115ms","start":"2022-05-31T19:51:39.100Z","end":"2022-05-31T19:51:39.425Z","steps":["trace[900947950] 'agreement among raft nodes before linearized reading'  (duration: 139.9489ms)","trace[900947950] 'range keys from in-memory index tree'  (duration: 184.5442ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T19:51:39.425Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-31T19:51:39.057Z","time spent":"368.4438ms","remote":"127.0.0.1:46090","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2022-05-31T19:51:39.425Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-31T19:51:39.100Z","time spent":"324.7749ms","remote":"127.0.0.1:46074","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":1,"response size":446,"request content":"key:\"/registry/services/endpoints/default/kubernetes\" "}
	{"level":"warn","ts":"2022-05-31T19:51:39.425Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"364.4696ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/kube-apiserver-embed-certs-20220531193346-2108.16f448688e55f0d8\" ","response":"range_response_count:1 size:803"}
	{"level":"info","ts":"2022-05-31T19:51:39.426Z","caller":"traceutil/trace.go:171","msg":"trace[1888885281] range","detail":"{range_begin:/registry/events/kube-system/kube-apiserver-embed-certs-20220531193346-2108.16f448688e55f0d8; range_end:; response_count:1; response_revision:746; }","duration":"365.0448ms","start":"2022-05-31T19:51:39.061Z","end":"2022-05-31T19:51:39.426Z","steps":["trace[1888885281] 'agreement among raft nodes before linearized reading'  (duration: 179.6032ms)","trace[1888885281] 'range keys from in-memory index tree'  (duration: 184.6653ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-31T19:51:39.426Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-31T19:51:39.060Z","time spent":"365.1014ms","remote":"127.0.0.1:46046","response type":"/etcdserverpb.KV/Range","request count":0,"request size":94,"response count":1,"response size":827,"request content":"key:\"/registry/events/kube-system/kube-apiserver-embed-certs-20220531193346-2108.16f448688e55f0d8\" "}
	
	* 
	* ==> kernel <==
	*  19:51:47 up  2:39,  0 users,  load average: 7.58, 6.96, 5.63
	Linux embed-certs-20220531193346-2108 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [e45e0056f239] <==
	* Trace[76097970]: ---"Object stored in database" 4561ms (19:51:36.447)
	Trace[76097970]: [4.5622068s] [4.5622068s] END
	I0531 19:51:36.449012       1 trace.go:205] Trace[1530367915]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.23.6 (linux/amd64) kubernetes/ad33385,audit-id:35cd4835-3709-4e56-9b3a-db598162d0da,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (31-May-2022 19:51:34.636) (total time: 1812ms):
	Trace[1530367915]: ---"About to write a response" 1812ms (19:51:36.448)
	Trace[1530367915]: [1.8126439s] [1.8126439s] END
	I0531 19:51:36.449421       1 trace.go:205] Trace[115240336]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:f1b73816-4b90-40b1-97a7-0bd0f5449a45,client:192.168.67.2,accept:application/json, */*,protocol:HTTP/2.0 (31-May-2022 19:51:32.238) (total time: 4211ms):
	Trace[115240336]: ---"About to write a response" 4211ms (19:51:36.449)
	Trace[115240336]: [4.2113668s] [4.2113668s] END
	I0531 19:51:36.454949       1 trace.go:205] Trace[1721943851]: "GuaranteedUpdate etcd3" type:*core.Event (31-May-2022 19:51:34.057) (total time: 2397ms):
	Trace[1721943851]: ---"initial value restored" 2391ms (19:51:36.449)
	Trace[1721943851]: [2.3974298s] [2.3974298s] END
	I0531 19:51:36.455300       1 trace.go:205] Trace[228968495]: "Patch" url:/api/v1/namespaces/kube-system/events/kube-apiserver-embed-certs-20220531193346-2108.16f448688e55f0d8,user-agent:kubelet/v1.23.6 (linux/amd64) kubernetes/ad33385,audit-id:9f27966c-a3f9-4ef3-9e43-f827a116100a,client:192.168.67.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (31-May-2022 19:51:34.057) (total time: 2398ms):
	Trace[228968495]: ---"About to apply patch" 2391ms (19:51:36.449)
	Trace[228968495]: [2.3981748s] [2.3981748s] END
	I0531 19:51:37.856957       1 trace.go:205] Trace[640905069]: "GuaranteedUpdate etcd3" type:*core.Endpoints (31-May-2022 19:51:36.474) (total time: 1381ms):
	Trace[640905069]: ---"Transaction committed" 1381ms (19:51:37.856)
	Trace[640905069]: [1.3819051s] [1.3819051s] END
	I0531 19:51:37.857327       1 trace.go:205] Trace[199981717]: "Update" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:fb8d4166-39b5-4567-bb3c-8321d9abdd0e,client:192.168.67.2,accept:application/json, */*,protocol:HTTP/2.0 (31-May-2022 19:51:36.474) (total time: 1382ms):
	Trace[199981717]: ---"Object stored in database" 1382ms (19:51:37.857)
	Trace[199981717]: [1.3826321s] [1.3826321s] END
	{"level":"warn","ts":"2022-05-31T19:51:39.044Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000a23180/#initially=[https://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	I0531 19:51:39.095970       1 trace.go:205] Trace[417301077]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (31-May-2022 19:51:36.472) (total time: 2623ms):
	Trace[417301077]: ---"Transaction prepared" 1381ms (19:51:37.857)
	Trace[417301077]: ---"Transaction committed" 1238ms (19:51:39.095)
	Trace[417301077]: [2.6232s] [2.6232s] END
	
	* 
	* ==> kube-controller-manager [0607075503f4] <==
	* E0531 19:49:45.835411       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:49:45.835428       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 19:49:45.835618       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 19:49:45.843602       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 19:49:45.843937       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:49:45.843988       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 19:49:45.844036       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 19:49:45.925714       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0531 19:49:45.925759       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:49:45.925815       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 19:49:45.926564       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 19:49:46.224823       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:49:46.225647       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0531 19:49:46.227723       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0531 19:49:46.227761       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0531 19:49:46.439790       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-9rhsq"
	I0531 19:49:46.439839       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-g6xnz"
	E0531 19:49:59.627032       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 19:50:00.142692       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 19:50:29.736689       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 19:50:30.332044       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 19:50:59.777785       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 19:51:00.523901       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0531 19:51:29.814313       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0531 19:51:30.559752       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [50d3e2775173] <==
	* E0531 19:49:45.130628       1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I0531 19:49:45.143693       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0531 19:49:45.151050       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0531 19:49:45.236963       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0531 19:49:45.244432       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0531 19:49:45.332877       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0531 19:49:45.639808       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0531 19:49:45.639890       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0531 19:49:45.640670       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 19:49:46.142410       1 server_others.go:206] "Using iptables Proxier"
	I0531 19:49:46.142631       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 19:49:46.142655       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 19:49:46.142803       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 19:49:46.144334       1 server.go:656] "Version info" version="v1.23.6"
	I0531 19:49:46.145879       1 config.go:317] "Starting service config controller"
	I0531 19:49:46.146010       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0531 19:49:46.146722       1 config.go:226] "Starting endpoint slice config controller"
	I0531 19:49:46.146736       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0531 19:49:46.247171       1 shared_informer.go:247] Caches are synced for service config 
	I0531 19:49:46.247227       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [482a2f211adf] <==
	* W0531 19:49:11.983904       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 19:49:11.984080       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 19:49:11.984657       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 19:49:11.984777       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 19:49:12.056586       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 19:49:12.056746       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 19:49:12.062018       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 19:49:12.062084       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 19:49:12.069118       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 19:49:12.069246       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 19:49:12.123835       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 19:49:12.124017       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 19:49:12.325108       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 19:49:12.325352       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 19:49:12.325399       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 19:49:12.325842       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 19:49:12.336385       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 19:49:12.336522       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 19:49:12.342122       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 19:49:12.342245       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 19:49:12.375792       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 19:49:12.375925       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 19:49:14.478824       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0531 19:49:14.478976       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0531 19:49:15.040440       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-05-31 19:43:05 UTC, end at Tue 2022-05-31 19:51:48 UTC. --
	May 31 19:51:48 embed-certs-20220531193346-2108 kubelet[5169]: E0531 19:51:48.191416    5169 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-56974995fc-g6xnz_kubernetes-dashboard(8c645eab-c464-4365-bb03-e761790e1b33)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc-g6xnz" podUID=8c645eab-c464-4365-bb03-e761790e1b33
	May 31 19:51:48 embed-certs-20220531193346-2108 kubelet[5169]: I0531 19:51:48.371135    5169 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/338c4bbf5c704a2a0ada42f2bf66d93d/volumes"
	May 31 19:51:48 embed-certs-20220531193346-2108 kubelet[5169]: I0531 19:51:48.371355    5169 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/ad2acd0137210a83460a0df3d2f2d9c9/volumes"
	May 31 19:51:48 embed-certs-20220531193346-2108 kubelet[5169]: I0531 19:51:48.371396    5169 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/bfb645df10bc1c0532ff7af877b3f38c/volumes"
	May 31 19:51:48 embed-certs-20220531193346-2108 kubelet[5169]: I0531 19:51:48.371438    5169 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/56c9cdb4-db41-46c5-8cd4-209677050138/volumes"
	May 31 19:51:48 embed-certs-20220531193346-2108 kubelet[5169]: I0531 19:51:48.371467    5169 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/b69231baa328f19f3705dac5129593c2/volumes"
	May 31 19:51:48 embed-certs-20220531193346-2108 kubelet[5169]: I0531 19:51:48.371497    5169 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/c6857c844cb9ecf685bba20626e8b532/volumes"
	May 31 19:51:48 embed-certs-20220531193346-2108 kubelet[5169]: I0531 19:51:48.371536    5169 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/ce596a6602b29a1aa40c59cd0f2e881f/volumes"
	May 31 19:51:48 embed-certs-20220531193346-2108 kubelet[5169]: I0531 19:51:48.371565    5169 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/e421f4b0ae3f1f9c0ed28247fadbce51/volumes"
	May 31 19:51:48 embed-certs-20220531193346-2108 kubelet[5169]: I0531 19:51:48.371596    5169 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/203b1d21-0785-45f8-a005-ccd8b231048b/volumes"
	May 31 19:51:48 embed-certs-20220531193346-2108 kubelet[5169]: I0531 19:51:48.371634    5169 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/cbd345e6-4310-42fe-a76c-9f335f470d4d/volumes"
	May 31 19:51:48 embed-certs-20220531193346-2108 kubelet[5169]: I0531 19:51:48.371666    5169 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/3b0548d289dd34b74c4b7db8c3b65ef0/volumes"
	May 31 19:51:48 embed-certs-20220531193346-2108 kubelet[5169]: I0531 19:51:48.371702    5169 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/9a740f1b-9edc-4440-b404-ed74d49c418d/volumes"
	May 31 19:51:48 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:48Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/podb69231baa328f19f3705dac5129593c2/117ac7c8a2babfd0606e35cbb6c8820305cdc6265e31d9b904b80d399295ccd9: device or resource busy"
	May 31 19:51:48 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:48Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/besteffort/pod9a740f1b-9edc-4440-b404-ed74d49c418d/152251cf734540cadf4adce797803fc34a434bfe785b30e55447d93a047b8f71: device or resource busy"
	May 31 19:51:48 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:48Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod56c9cdb4-db41-46c5-8cd4-209677050138/71f6a19ff0ef670dfa232a840aeac80f93dc8554b902cd7cf38daa534ee954d7: device or resource busy"
	May 31 19:51:48 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:48Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/besteffort/pod203b1d21-0785-45f8-a005-ccd8b231048b/b0eb67caf61ba77a4e12b80f810e99e7337f9d9de00929305f89327c700f3793: device or resource busy"
	May 31 19:51:48 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:48Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/besteffort/podcbd345e6-4310-42fe-a76c-9f335f470d4d/2b26f9f46e44565f1c2e001155defd97b977122e0e621b13581d05976c1b7b93: device or resource busy"
	May 31 19:51:48 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:48Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/podbfb645df10bc1c0532ff7af877b3f38c/a069846beba9f18c6be439b3f7a00364fc4dac17fc5956f0e7274ec8b629a0fa: device or resource busy"
	May 31 19:51:48 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:48Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/podad2acd0137210a83460a0df3d2f2d9c9/02447e1fee273cf431999d5ce3bd9422f97740c73a98b10c876dd6839cc36d42: device or resource busy"
	May 31 19:51:48 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:48Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pode421f4b0ae3f1f9c0ed28247fadbce51/d83eb103bd5e7550adb10a72e917bc3841ab9aa4f1a34e54b344c8a63a77b51b: device or resource busy"
	May 31 19:51:48 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:48Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod3b0548d289dd34b74c4b7db8c3b65ef0/5b2465d08ca73458663a8a1b17daa7e7a54d8df634080979ae106b253244171c: device or resource busy"
	May 31 19:51:48 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:48Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod338c4bbf5c704a2a0ada42f2bf66d93d/29f57e4f7f015abd015519b511289d402d27aa9450531a22ac2f95291ea19963: device or resource busy"
	May 31 19:51:48 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:48Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/podce596a6602b29a1aa40c59cd0f2e881f/4dee898a48112a5532769c2988b800aaa45f5f0565a70bffcc8a07ade742104b: device or resource busy"
	May 31 19:51:48 embed-certs-20220531193346-2108 kubelet[5169]: time="2022-05-31T19:51:48Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/podc6857c844cb9ecf685bba20626e8b532/06dbdf88324c6f470d371247122610f9c81a280b764169403ccb1aac547fe2a3: device or resource busy"
	
	* 
	* ==> kubernetes-dashboard [d951ec257dd1] <==
	* 2022/05/31 19:50:33 Starting overwatch
	2022/05/31 19:50:33 Using namespace: kubernetes-dashboard
	2022/05/31 19:50:33 Using in-cluster config to connect to apiserver
	2022/05/31 19:50:33 Using secret token for csrf signing
	2022/05/31 19:50:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/05/31 19:50:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/05/31 19:50:33 Successful initial request to the apiserver, version: v1.23.6
	2022/05/31 19:50:33 Generating JWE encryption key
	2022/05/31 19:50:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/05/31 19:50:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/05/31 19:50:34 Initializing JWE encryption key from synchronized object
	2022/05/31 19:50:34 Creating in-cluster Sidecar client
	2022/05/31 19:50:34 Serving insecurely on HTTP port: 9090
	2022/05/31 19:50:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/05/31 19:51:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/05/31 19:51:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [9a7ffad2971b] <==
	* I0531 19:49:50.029861       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0531 19:49:50.127335       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0531 19:49:50.127719       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0531 19:49:50.230182       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0531 19:49:50.230445       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"848a626b-366f-40da-b374-e7c69b22f1b4", APIVersion:"v1", ResourceVersion:"598", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20220531193346-2108_77c199d1-f409-4014-82d5-d028fac289c7 became leader
	I0531 19:49:50.230610       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20220531193346-2108_77c199d1-f409-4014-82d5-d028fac289c7!
	I0531 19:49:50.332394       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20220531193346-2108_77c199d1-f409-4014-82d5-d028fac289c7!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220531193346-2108 -n embed-certs-20220531193346-2108
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220531193346-2108 -n embed-certs-20220531193346-2108: (7.662968s)
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220531193346-2108 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-b955d9d8-w48dh
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220531193346-2108 describe pod metrics-server-b955d9d8-w48dh
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220531193346-2108 describe pod metrics-server-b955d9d8-w48dh: exit status 1 (312.9848ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-b955d9d8-w48dh" not found

** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220531193346-2108 describe pod metrics-server-b955d9d8-w48dh: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (64.05s)

TestNetworkPlugins/group/auto/DNS (357.44s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (22.8362329s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default
E0531 19:58:43.342622    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-20220531191930-2108\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5471142s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default
E0531 19:58:56.319838    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.4882095s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0531 19:59:11.158873    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-20220531191930-2108\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Run:  kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default
E0531 19:59:12.284628    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220531192611-2108\client.crt: The system cannot find the path specified.
E0531 19:59:13.058993    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5280837s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (195.1µs)

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
E0531 19:59:46.146367    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (278µs)
E0531 19:59:56.253552    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220531192531-2108\client.crt: The system cannot find the path specified.

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
E0531 20:01:10.025091    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
E0531 20:02:04.086582    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-different-port-20220531193451-2108\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Run:  kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
net_test.go:169: (dbg) Run:  kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
E0531 20:03:43.346761    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-20220531191930-2108\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Run:  kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220531191922-2108 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
net_test.go:175: failed to do nslookup on kubernetes.default: context deadline exceeded
net_test.go:180: failed nslookup: got="", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/auto/DNS (357.44s)

TestNetworkPlugins/group/bridge/Start (52s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-20220531191922-2108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p bridge-20220531191922-2108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker: exit status 1 (52.0013826s)

-- stdout --
	* [bridge-20220531191922-2108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node bridge-20220531191922-2108 in cluster bridge-20220531191922-2108
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

-- /stdout --
** stderr ** 
	I0531 19:58:31.003076    2088 out.go:296] Setting OutFile to fd 1576 ...
	I0531 19:58:31.062583    2088 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:58:31.062583    2088 out.go:309] Setting ErrFile to fd 1596...
	I0531 19:58:31.062583    2088 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:58:31.076460    2088 out.go:303] Setting JSON to false
	I0531 19:58:31.079070    2088 start.go:115] hostinfo: {"hostname":"minikube7","uptime":85381,"bootTime":1653941730,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0531 19:58:31.079070    2088 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 19:58:31.084967    2088 out.go:177] * [bridge-20220531191922-2108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0531 19:58:31.088836    2088 notify.go:193] Checking for updates...
	I0531 19:58:31.092146    2088 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0531 19:58:31.097744    2088 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0531 19:58:31.102764    2088 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 19:58:31.108009    2088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:58:31.113287    2088 config.go:178] Loaded profile config "auto-20220531191922-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:58:31.113287    2088 config.go:178] Loaded profile config "calico-20220531191937-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:58:31.113287    2088 config.go:178] Loaded profile config "false-20220531191930-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 19:58:31.114363    2088 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 19:58:33.874084    2088 docker.go:137] docker version: linux-20.10.14
	I0531 19:58:33.885037    2088 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:58:36.160212    2088 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2751647s)
	I0531 19:58:36.160212    2088 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:85 OomKillDisable:true NGoroutines:63 SystemTime:2022-05-31 19:58:35.0170559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 19:58:36.165225    2088 out.go:177] * Using the docker driver based on user configuration
	I0531 19:58:36.171229    2088 start.go:284] selected driver: docker
	I0531 19:58:36.171229    2088 start.go:806] validating driver "docker" against <nil>
	I0531 19:58:36.171229    2088 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:58:36.308602    2088 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:58:38.556045    2088 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2464333s)
	I0531 19:58:38.556412    2088 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:85 OomKillDisable:true NGoroutines:63 SystemTime:2022-05-31 19:58:37.4206909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 19:58:38.556412    2088 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0531 19:58:38.557034    2088 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 19:58:38.562660    2088 out.go:177] * Using Docker Desktop driver with the root privilege
	I0531 19:58:38.566916    2088 cni.go:95] Creating CNI manager for "bridge"
	I0531 19:58:38.566916    2088 start_flags.go:301] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0531 19:58:38.566916    2088 start_flags.go:306] config:
	{Name:bridge-20220531191922-2108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:bridge-20220531191922-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 19:58:38.568932    2088 out.go:177] * Starting control plane node bridge-20220531191922-2108 in cluster bridge-20220531191922-2108
	I0531 19:58:38.568932    2088 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 19:58:38.568932    2088 out.go:177] * Pulling base image ...
	I0531 19:58:38.583893    2088 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 19:58:38.584570    2088 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 19:58:38.584720    2088 preload.go:148] Found local preload: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0531 19:58:38.584803    2088 cache.go:57] Caching tarball of preloaded images
	I0531 19:58:38.585304    2088 preload.go:174] Found C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0531 19:58:38.585567    2088 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0531 19:58:38.585934    2088 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-20220531191922-2108\config.json ...
	I0531 19:58:38.586310    2088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\bridge-20220531191922-2108\config.json: {Name:mk59dafa908de803a2b281f8afcddeab0754f2c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:58:39.776835    2088 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon, skipping pull
	I0531 19:58:39.776835    2088 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in daemon, skipping load
	I0531 19:58:39.776835    2088 cache.go:206] Successfully downloaded all kic artifacts
	I0531 19:58:39.776835    2088 start.go:352] acquiring machines lock for bridge-20220531191922-2108: {Name:mk57bbc29b2fb31887587ddaf1ae16787cc4bad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:58:39.776835    2088 start.go:356] acquired machines lock for "bridge-20220531191922-2108" in 0s
	I0531 19:58:39.776835    2088 start.go:91] Provisioning new machine with config: &{Name:bridge-20220531191922-2108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:bridge-20220531191922-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0531 19:58:39.776835    2088 start.go:131] createHost starting for "" (driver="docker")
	I0531 19:58:39.779843    2088 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0531 19:58:39.779843    2088 start.go:165] libmachine.API.Create for "bridge-20220531191922-2108" (driver="docker")
	I0531 19:58:39.779843    2088 client.go:168] LocalClient.Create starting
	I0531 19:58:39.780854    2088 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\ca.pem
	I0531 19:58:39.780854    2088 main.go:134] libmachine: Decoding PEM data...
	I0531 19:58:39.780854    2088 main.go:134] libmachine: Parsing certificate...
	I0531 19:58:39.781842    2088 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube7\minikube-integration\.minikube\certs\cert.pem
	I0531 19:58:39.781842    2088 main.go:134] libmachine: Decoding PEM data...
	I0531 19:58:39.781842    2088 main.go:134] libmachine: Parsing certificate...
	I0531 19:58:39.789835    2088 cli_runner.go:164] Run: docker network inspect bridge-20220531191922-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 19:58:41.047393    2088 cli_runner.go:211] docker network inspect bridge-20220531191922-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 19:58:41.047393    2088 cli_runner.go:217] Completed: docker network inspect bridge-20220531191922-2108 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2575531s)
	I0531 19:58:41.058459    2088 network_create.go:272] running [docker network inspect bridge-20220531191922-2108] to gather additional debugging logs...
	I0531 19:58:41.058549    2088 cli_runner.go:164] Run: docker network inspect bridge-20220531191922-2108
	W0531 19:58:42.217973    2088 cli_runner.go:211] docker network inspect bridge-20220531191922-2108 returned with exit code 1
	I0531 19:58:42.217973    2088 cli_runner.go:217] Completed: docker network inspect bridge-20220531191922-2108: (1.1594195s)
	I0531 19:58:42.217973    2088 network_create.go:275] error running [docker network inspect bridge-20220531191922-2108]: docker network inspect bridge-20220531191922-2108: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-20220531191922-2108
	I0531 19:58:42.217973    2088 network_create.go:277] output of [docker network inspect bridge-20220531191922-2108]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-20220531191922-2108
	
	** /stderr **
	I0531 19:58:42.228002    2088 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 19:58:43.357951    2088 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1298034s)
	I0531 19:58:43.380031    2088 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0006e6198] misses:0}
	I0531 19:58:43.380031    2088 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 19:58:43.380031    2088 network_create.go:115] attempt to create docker network bridge-20220531191922-2108 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0531 19:58:43.387618    2088 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220531191922-2108
	W0531 19:58:44.570872    2088 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220531191922-2108 returned with exit code 1
	I0531 19:58:44.570872    2088 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220531191922-2108: (1.1832491s)
	W0531 19:58:44.570872    2088 network_create.go:107] failed to create docker network bridge-20220531191922-2108 192.168.49.0/24, will retry: subnet is taken
	I0531 19:58:44.591031    2088 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006e6198] amended:false}} dirty:map[] misses:0}
	I0531 19:58:44.591031    2088 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 19:58:44.610236    2088 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0006e6198] amended:true}} dirty:map[192.168.49.0:0xc0006e6198 192.168.58.0:0xc0006e6230] misses:0}
	I0531 19:58:44.610236    2088 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0531 19:58:44.610236    2088 network_create.go:115] attempt to create docker network bridge-20220531191922-2108 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0531 19:58:44.617313    2088 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220531191922-2108
	I0531 19:58:45.881876    2088 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true bridge-20220531191922-2108: (1.2645572s)
	I0531 19:58:45.881876    2088 network_create.go:99] docker network bridge-20220531191922-2108 192.168.58.0/24 created
	I0531 19:58:45.881876    2088 kic.go:106] calculated static IP "192.168.58.2" for the "bridge-20220531191922-2108" container
	I0531 19:58:45.894874    2088 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 19:58:47.078969    2088 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1840894s)
	I0531 19:58:47.085868    2088 cli_runner.go:164] Run: docker volume create bridge-20220531191922-2108 --label name.minikube.sigs.k8s.io=bridge-20220531191922-2108 --label created_by.minikube.sigs.k8s.io=true
	I0531 19:58:48.294205    2088 cli_runner.go:217] Completed: docker volume create bridge-20220531191922-2108 --label name.minikube.sigs.k8s.io=bridge-20220531191922-2108 --label created_by.minikube.sigs.k8s.io=true: (1.2071719s)
	I0531 19:58:48.294272    2088 oci.go:103] Successfully created a docker volume bridge-20220531191922-2108
	I0531 19:58:48.302980    2088 cli_runner.go:164] Run: docker run --rm --name bridge-20220531191922-2108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20220531191922-2108 --entrypoint /usr/bin/test -v bridge-20220531191922-2108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib
	I0531 19:58:51.411022    2088 cli_runner.go:217] Completed: docker run --rm --name bridge-20220531191922-2108-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20220531191922-2108 --entrypoint /usr/bin/test -v bridge-20220531191922-2108:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -d /var/lib: (3.1080275s)
	I0531 19:58:51.411354    2088 oci.go:107] Successfully prepared a docker volume bridge-20220531191922-2108
	I0531 19:58:51.411354    2088 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 19:58:51.411457    2088 kic.go:179] Starting extracting preloaded images to volume ...
	I0531 19:58:51.419270    2088 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20220531191922-2108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 19:59:15.885208    2088 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-20220531191922-2108:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 -I lz4 -xf /preloaded.tar -C /extractDir: (24.4657745s)
	I0531 19:59:15.885429    2088 kic.go:188] duration metric: took 24.473810 seconds to extract preloaded images to volume
	I0531 19:59:15.894605    2088 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:59:18.223086    2088 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.3284705s)
	I0531 19:59:18.223086    2088 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:87 OomKillDisable:true NGoroutines:63 SystemTime:2022-05-31 19:59:17.0865382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 19:59:18.232074    2088 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 19:59:20.735074    2088 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.5029888s)
	I0531 19:59:20.748652    2088 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-20220531191922-2108 --name bridge-20220531191922-2108 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-20220531191922-2108 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-20220531191922-2108 --network bridge-20220531191922-2108 --ip 192.168.58.2 --volume bridge-20220531191922-2108:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418

** /stderr **
net_test.go:103: failed start: exit status 1
--- FAIL: TestNetworkPlugins/group/bridge/Start (52.00s)
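The subnet-collision retry visible in the log above (reserve 192.168.49.0/24, fail with "subnet is taken", skip the now-unexpired reservation, fall through to 192.168.58.0/24 and derive its gateway/client range) can be sketched as follows. This is an illustrative model only, not minikube's actual network.go code; the reservation map, TTL constant, and helper names are assumptions.

```python
import ipaddress
import time

# 1m0s reservation TTL, matching the "reserving subnet ... for 1m0s" log lines.
RESERVATION_TTL = 60.0

reservations = {}  # CIDR -> expiry timestamp (monotonic-ish)

def reserve(cidr: str, now: float) -> bool:
    """Reserve a candidate subnet; refuse while an earlier reservation is unexpired."""
    expiry = reservations.get(cidr)
    if expiry is not None and expiry > now:
        return False  # "skipping subnet ... that has unexpired reservation"
    reservations[cidr] = now + RESERVATION_TTL
    return True

def describe(cidr: str) -> dict:
    """Derive the gateway and client range the log reports for a /24."""
    net = ipaddress.ip_network(cidr)
    hosts = list(net.hosts())
    return {
        "gateway": str(hosts[0]),      # first usable address, e.g. 192.168.58.1
        "client_min": str(hosts[1]),   # first address handed to a container
        "client_max": str(hosts[-1]),
        "broadcast": str(net.broadcast_address),
    }

now = time.time()
assert reserve("192.168.49.0/24", now)          # first attempt reserves .49.0
assert not reserve("192.168.49.0/24", now + 1)  # retry skips the unexpired entry
assert reserve("192.168.58.0/24", now + 1)      # falls through to .58.0
print(describe("192.168.58.0/24"))
```

The derived values match the log's second attempt: gateway 192.168.58.1, client range .2 to .254, broadcast .255, and the static container IP 192.168.58.2 is simply `client_min`.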

TestNetworkPlugins/group/false/KubeletFlags (5.84s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-20220531191930-2108 "pgrep -a kubelet"

=== CONT  TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p false-20220531191930-2108 "pgrep -a kubelet": exit status 1 (5.8364438s)
net_test.go:125: ssh failed: exit status 1
--- FAIL: TestNetworkPlugins/group/false/KubeletFlags (5.84s)
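For context, the KubeletFlags subtest runs `pgrep -a kubelet` over ssh and inspects the resulting kubelet command line for the expected flags; in this run the ssh itself failed, so no output was available. A hedged sketch of the parsing side is below; the sample pgrep line and the helper name are hypothetical, not captured from this run.

```python
# `pgrep -a` prints "<pid> <full command line>"; the test greps the flags
# out of the kubelet invocation. SAMPLE is an invented, plausible line.
SAMPLE = ("1234 /var/lib/minikube/binaries/v1.23.6/kubelet "
          "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "
          "--network-plugin=cni --node-ip=192.168.58.2")

def kubelet_flags(pgrep_line: str) -> dict:
    """Parse one `pgrep -a` output line into a {--flag: value} map."""
    _pid, _binary, *args = pgrep_line.split()
    flags = {}
    for arg in args:
        if arg.startswith("--"):
            key, _, value = arg.partition("=")
            flags[key] = value
    return flags

print(kubelet_flags(SAMPLE)["--network-plugin"])  # cni
```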

TestNetworkPlugins/group/enable-default-cni/Start (0s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-20220531191922-2108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p enable-default-cni-20220531191922-2108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker: context deadline exceeded (0s)
net_test.go:103: failed start: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (0.00s)

TestNetworkPlugins/group/kubenet/Start (0s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-20220531191922-2108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubenet-20220531191922-2108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker: context deadline exceeded (0s)
net_test.go:103: failed start: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/kubenet/Start (0.00s)
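Both 0-second failures above report `context deadline exceeded` before `minikube start` is even launched: earlier, slower subtests consumed the shared deadline, so these starts short-circuit immediately. A minimal sketch of that behavior, using a Python stand-in for Go's context-deadline check (the names here are illustrative):

```python
import time

class DeadlineExceeded(Exception):
    """Stand-in for Go's context.DeadlineExceeded error."""

def run_with_deadline(deadline: float, cmd_name: str) -> str:
    # Mirrors the ctx.Err() short-circuit: if the deadline is already in the
    # past, the command is never launched and the call fails in ~0s.
    if time.monotonic() >= deadline:
        raise DeadlineExceeded(f"{cmd_name}: context deadline exceeded")
    return f"{cmd_name}: started"

expired = time.monotonic() - 1  # deadline already consumed, as in the log
try:
    run_with_deadline(expired, "minikube start -p kubenet")
except DeadlineExceeded as e:
    print(e)  # minikube start -p kubenet: context deadline exceeded
```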


Test pass (215/254)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 17.25
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.41
10 TestDownloadOnly/v1.23.6/json-events 13.56
11 TestDownloadOnly/v1.23.6/preload-exists 0
14 TestDownloadOnly/v1.23.6/kubectl 0
15 TestDownloadOnly/v1.23.6/LogsDuration 0.39
16 TestDownloadOnly/DeleteAll 11.11
17 TestDownloadOnly/DeleteAlwaysSucceeds 6.86
18 TestDownloadOnlyKic 45.09
19 TestBinaryMirror 16.23
20 TestOffline 235.39
22 TestAddons/Setup 406.09
26 TestAddons/parallel/MetricsServer 12.85
27 TestAddons/parallel/HelmTiller 35.37
29 TestAddons/parallel/CSI 96.55
31 TestAddons/serial/GCPAuth 25.93
32 TestAddons/StoppedEnableDisable 24.15
33 TestCertOptions 526.49
34 TestCertExpiration 738.83
35 TestDockerFlags 163.84
36 TestForceSystemdFlag 526.36
37 TestForceSystemdEnv 162.95
42 TestErrorSpam/setup 115.23
43 TestErrorSpam/start 21.25
44 TestErrorSpam/status 18.72
45 TestErrorSpam/pause 16.34
46 TestErrorSpam/unpause 17
47 TestErrorSpam/stop 31.94
50 TestFunctional/serial/CopySyncFile 0.03
51 TestFunctional/serial/StartWithProxy 128.69
52 TestFunctional/serial/AuditLog 0
53 TestFunctional/serial/SoftStart 34.05
54 TestFunctional/serial/KubeContext 0.16
55 TestFunctional/serial/KubectlGetPods 0.35
58 TestFunctional/serial/CacheCmd/cache/add_remote 18.06
59 TestFunctional/serial/CacheCmd/cache/add_local 9.21
60 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.34
61 TestFunctional/serial/CacheCmd/cache/list 0.35
62 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 6.09
63 TestFunctional/serial/CacheCmd/cache/cache_reload 23.75
64 TestFunctional/serial/CacheCmd/cache/delete 0.7
65 TestFunctional/serial/MinikubeKubectlCmd 1.95
66 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.87
67 TestFunctional/serial/ExtraConfig 61.76
68 TestFunctional/serial/ComponentHealth 0.3
69 TestFunctional/serial/LogsCmd 7.49
70 TestFunctional/serial/LogsFileCmd 8.46
72 TestFunctional/parallel/ConfigCmd 2.27
74 TestFunctional/parallel/DryRun 12.96
75 TestFunctional/parallel/InternationalLanguage 5.25
76 TestFunctional/parallel/StatusCmd 23.19
81 TestFunctional/parallel/AddonsCmd 3.67
82 TestFunctional/parallel/PersistentVolumeClaim 53.1
84 TestFunctional/parallel/SSHCmd 14.72
85 TestFunctional/parallel/CpCmd 25.81
86 TestFunctional/parallel/MySQL 70.49
87 TestFunctional/parallel/FileSync 6.46
88 TestFunctional/parallel/CertSync 36.47
92 TestFunctional/parallel/NodeLabels 0.22
94 TestFunctional/parallel/NonActiveRuntimeDisabled 6.55
96 TestFunctional/parallel/ProfileCmd/profile_not_create 11.15
97 TestFunctional/parallel/DockerEnv/powershell 29.25
99 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
101 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.64
102 TestFunctional/parallel/ProfileCmd/profile_list 7.05
104 TestFunctional/parallel/ProfileCmd/profile_json_output 7.95
105 TestFunctional/parallel/UpdateContextCmd/no_changes 4.14
106 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 4.03
107 TestFunctional/parallel/UpdateContextCmd/no_clusters 4.06
108 TestFunctional/parallel/ImageCommands/ImageListShort 4.17
109 TestFunctional/parallel/ImageCommands/ImageListTable 4.16
110 TestFunctional/parallel/ImageCommands/ImageListJson 4.24
111 TestFunctional/parallel/ImageCommands/ImageListYaml 4.31
112 TestFunctional/parallel/ImageCommands/ImageBuild 18.42
113 TestFunctional/parallel/ImageCommands/Setup 5.4
114 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 16.69
115 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 12.75
116 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 21.62
117 TestFunctional/parallel/ImageCommands/ImageSaveToFile 7.87
118 TestFunctional/parallel/ImageCommands/ImageRemove 8.17
119 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 12.62
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 19.2
126 TestFunctional/parallel/Version/short 0.36
127 TestFunctional/parallel/Version/components 5.81
128 TestFunctional/delete_addon-resizer_images 0.01
129 TestFunctional/delete_my-image_image 0.01
130 TestFunctional/delete_minikube_cached_images 0.01
133 TestIngressAddonLegacy/StartLegacyK8sCluster 134.82
135 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 51
136 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 4.57
140 TestJSONOutput/start/Command 127.46
141 TestJSONOutput/start/Audit 0
143 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
144 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
146 TestJSONOutput/pause/Command 6.01
147 TestJSONOutput/pause/Audit 0
149 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
150 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
152 TestJSONOutput/unpause/Command 5.62
153 TestJSONOutput/unpause/Audit 0
155 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
156 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
158 TestJSONOutput/stop/Command 17.73
159 TestJSONOutput/stop/Audit 0
161 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
162 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
163 TestErrorJSONOutput 7.11
165 TestKicCustomNetwork/create_custom_network 135.08
166 TestKicCustomNetwork/use_default_bridge_network 128.23
167 TestKicExistingNetwork 136.52
168 TestKicCustomSubnet 135.05
169 TestMainNoArgs 0.33
170 TestMinikubeProfile 292.11
173 TestMountStart/serial/StartWithMountFirst 49.39
174 TestMountStart/serial/VerifyMountFirst 5.98
175 TestMountStart/serial/StartWithMountSecond 49.8
176 TestMountStart/serial/VerifyMountSecond 5.84
177 TestMountStart/serial/DeleteFirst 18.41
178 TestMountStart/serial/VerifyMountPostDelete 5.84
179 TestMountStart/serial/Stop 8.48
180 TestMountStart/serial/RestartStopped 28.46
181 TestMountStart/serial/VerifyMountPostStop 5.92
184 TestMultiNode/serial/FreshStart2Nodes 251.34
185 TestMultiNode/serial/DeployApp2Nodes 24.95
186 TestMultiNode/serial/PingHostFrom2Pods 10.58
187 TestMultiNode/serial/AddNode 118.24
188 TestMultiNode/serial/ProfileList 6.35
189 TestMultiNode/serial/CopyFile 215.31
190 TestMultiNode/serial/StopNode 29.07
191 TestMultiNode/serial/StartAfterStop 53.22
192 TestMultiNode/serial/RestartKeepsNodes 214.24
193 TestMultiNode/serial/DeleteNode 43.07
194 TestMultiNode/serial/StopMultiNode 39.96
195 TestMultiNode/serial/RestartMultiNode 122.11
196 TestMultiNode/serial/ValidateNameConflict 141.59
200 TestPreload 345.63
201 TestScheduledStopWindows 216
205 TestInsufficientStorage 107.36
206 TestRunningBinaryUpgrade 324.42
209 TestMissingContainerUpgrade 484.3
211 TestNoKubernetes/serial/StartNoK8sWithVersion 0.49
213 TestStoppedBinaryUpgrade/Setup 0.62
220 TestNoKubernetes/serial/StartWithK8s 192.25
221 TestStoppedBinaryUpgrade/Upgrade 438.76
222 TestNoKubernetes/serial/StartWithStopK8s 91.71
225 TestPause/serial/Start 130.43
226 TestStoppedBinaryUpgrade/MinikubeLogs 10.99
227 TestPause/serial/SecondStartNoReconfiguration 38.82
228 TestPause/serial/Pause 7.3
229 TestPause/serial/VerifyStatus 6.87
230 TestPause/serial/Unpause 6.95
231 TestPause/serial/PauseAgain 6.63
232 TestPause/serial/DeletePaused 50.71
246 TestStartStop/group/old-k8s-version/serial/FirstStart 564.47
248 TestStartStop/group/no-preload/serial/FirstStart 180.38
249 TestStartStop/group/no-preload/serial/DeployApp 11.44
250 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 5.89
251 TestStartStop/group/no-preload/serial/Stop 18.6
252 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 5.95
253 TestStartStop/group/no-preload/serial/SecondStart 414.65
255 TestStartStop/group/embed-certs/serial/FirstStart 497.33
257 TestStartStop/group/default-k8s-different-port/serial/FirstStart 131.84
258 TestStartStop/group/old-k8s-version/serial/DeployApp 11.03
259 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 6.16
260 TestStartStop/group/old-k8s-version/serial/Stop 23.29
261 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 5.95
262 TestStartStop/group/old-k8s-version/serial/SecondStart 470.27
263 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 23.15
264 TestStartStop/group/default-k8s-different-port/serial/DeployApp 13.45
265 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.96
266 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 6.68
267 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 7.89
268 TestStartStop/group/default-k8s-different-port/serial/Stop 19.32
269 TestStartStop/group/no-preload/serial/Pause 42.34
270 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 6.11
271 TestStartStop/group/default-k8s-different-port/serial/SecondStart 417.3
273 TestStartStop/group/newest-cni/serial/FirstStart 144.06
274 TestStartStop/group/newest-cni/serial/DeployApp 0
275 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 6.38
276 TestStartStop/group/newest-cni/serial/Stop 19.69
277 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 6.67
278 TestStartStop/group/newest-cni/serial/SecondStart 86.19
279 TestStartStop/group/embed-certs/serial/DeployApp 12.11
280 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 7.52
281 TestStartStop/group/embed-certs/serial/Stop 19.68
282 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 6.93
283 TestStartStop/group/embed-certs/serial/SecondStart 430.66
284 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
285 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
286 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 8.29
287 TestStartStop/group/newest-cni/serial/Pause 48.9
288 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.04
289 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.55
290 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 7.48
292 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 46.09
293 TestNetworkPlugins/group/auto/Start 764.77
294 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.52
295 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 6.83
296 TestStartStop/group/default-k8s-different-port/serial/Pause 48.98
297 TestNetworkPlugins/group/kindnet/Start 171.3
299 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
300 TestNetworkPlugins/group/kindnet/KubeletFlags 6.84
301 TestNetworkPlugins/group/kindnet/NetCatPod 21.13
302 TestNetworkPlugins/group/kindnet/DNS 0.61
303 TestNetworkPlugins/group/kindnet/Localhost 0.5
304 TestNetworkPlugins/group/kindnet/HairPin 0.63
306 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 40.05
307 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.55
308 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 7.43
310 TestNetworkPlugins/group/false/Start 394.39
311 TestNetworkPlugins/group/auto/KubeletFlags 7.64
312 TestNetworkPlugins/group/auto/NetCatPod 22.02
TestDownloadOnly/v1.16.0/json-events (17.25s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220531171529-2108 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220531171529-2108 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker: (17.2501493s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (17.25s)
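The json-events subtest drives `minikube start -o=json`, which emits one JSON event per output line; the related JSONOutput subtests then assert that the reported step numbers are distinct and increasing. A hedged sketch of consuming such output follows; the sample lines and exact payload fields are illustrative, not captured from this run.

```python
import json

# Two invented event lines shaped like minikube's io.k8s.sigs.minikube.step
# events (one JSON object per line); field names are assumptions.
SAMPLE_OUTPUT = """\
{"type":"io.k8s.sigs.minikube.step","data":{"currentstep":"1","totalsteps":"8","name":"Selecting Driver"}}
{"type":"io.k8s.sigs.minikube.step","data":{"currentstep":"2","totalsteps":"8","name":"Downloading Artifacts"}}
"""

def current_steps(raw: str) -> list:
    """Collect the currentstep numbers from step events, in emission order."""
    steps = []
    for line in raw.splitlines():
        event = json.loads(line)
        if event["type"] == "io.k8s.sigs.minikube.step":
            steps.append(int(event["data"]["currentstep"]))
    return steps

steps = current_steps(SAMPLE_OUTPUT)
# Distinct and increasing, as the JSONOutput subtests require:
assert steps == sorted(set(steps))
print(steps)  # [1, 2]
```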

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.41s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20220531171529-2108
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20220531171529-2108: exit status 85 (407.3957ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 17:15:30
	Running on machine: minikube7
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 17:15:30.805545    4660 out.go:296] Setting OutFile to fd 640 ...
	I0531 17:15:30.870334    4660 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:15:30.870471    4660 out.go:309] Setting ErrFile to fd 564...
	I0531 17:15:30.870471    4660 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0531 17:15:30.879324    4660 root.go:300] Error reading config file at C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0531 17:15:30.883263    4660 out.go:303] Setting JSON to true
	I0531 17:15:30.884813    4660 start.go:115] hostinfo: {"hostname":"minikube7","uptime":75601,"bootTime":1653941729,"procs":151,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0531 17:15:30.884813    4660 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 17:15:30.914006    4660 out.go:97] [download-only-20220531171529-2108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0531 17:15:30.914907    4660 notify.go:193] Checking for updates...
	I0531 17:15:30.917671    4660 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	W0531 17:15:30.915084    4660 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0531 17:15:30.922640    4660 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0531 17:15:30.926056    4660 out.go:169] MINIKUBE_LOCATION=14079
	I0531 17:15:30.928761    4660 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0531 17:15:30.934609    4660 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0531 17:15:30.935569    4660 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 17:15:33.509834    4660 docker.go:137] docker version: linux-20.10.14
	I0531 17:15:33.517644    4660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:15:35.476898    4660 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9592446s)
	I0531 17:15:35.477741    4660 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2022-05-31 17:15:34.4779913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:15:35.480841    4660 out.go:97] Using the docker driver based on user configuration
	I0531 17:15:35.481016    4660 start.go:284] selected driver: docker
	I0531 17:15:35.481139    4660 start.go:806] validating driver "docker" against <nil>
	I0531 17:15:35.504213    4660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:15:37.450871    4660 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9465487s)
	I0531 17:15:37.451033    4660 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2022-05-31 17:15:36.4631502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:15:37.451554    4660 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0531 17:15:37.577988    4660 start_flags.go:373] Using suggested 16300MB memory alloc based on sys=65534MB, container=51405MB
	I0531 17:15:37.578709    4660 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0531 17:15:37.600762    4660 out.go:169] Using Docker Desktop driver with the root privilege
	I0531 17:15:37.603600    4660 cni.go:95] Creating CNI manager for ""
	I0531 17:15:37.604557    4660 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 17:15:37.604557    4660 start_flags.go:306] config:
	{Name:download-only-20220531171529-2108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220531171529-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomai
n:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 17:15:37.607794    4660 out.go:97] Starting control plane node download-only-20220531171529-2108 in cluster download-only-20220531171529-2108
	I0531 17:15:37.607923    4660 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 17:15:37.610517    4660 out.go:97] Pulling base image ...
	I0531 17:15:37.610517    4660 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0531 17:15:37.610517    4660 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 17:15:37.657087    4660 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0531 17:15:37.657087    4660 cache.go:57] Caching tarball of preloaded images
	I0531 17:15:37.657169    4660 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0531 17:15:37.662609    4660 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0531 17:15:37.662710    4660 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0531 17:15:37.727727    4660 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0531 17:15:39.112711    4660 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 to local cache
	I0531 17:15:39.112711    4660 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418.tar -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653596720-14230@sha256_e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418.tar
	I0531 17:15:39.112711    4660 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418.tar -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653596720-14230@sha256_e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418.tar
	I0531 17:15:39.112711    4660 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local cache directory
	I0531 17:15:39.117696    4660 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 to local cache
	I0531 17:15:40.933439    4660 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0531 17:15:41.247450    4660 preload.go:256] verifying checksum of C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0531 17:15:42.320540    4660 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0531 17:15:42.321677    4660 profile.go:148] Saving config to C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\download-only-20220531171529-2108\config.json ...
	I0531 17:15:42.322159    4660 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\download-only-20220531171529-2108\config.json: {Name:mkc89d529bccd884591a89d6fd42d568f1432a28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:15:42.322888    4660 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0531 17:15:42.324905    4660 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/windows/amd64/kubectl.exe?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/windows/amd64/kubectl.exe.sha1 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\windows\amd64\v1.16.0/kubectl.exe
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220531171529-2108"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.41s)

                                                
                                    
TestDownloadOnly/v1.23.6/json-events (13.56s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220531171529-2108 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220531171529-2108 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=docker --driver=docker: (13.5556354s)
--- PASS: TestDownloadOnly/v1.23.6/json-events (13.56s)

                                                
                                    
TestDownloadOnly/v1.23.6/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/preload-exists
--- PASS: TestDownloadOnly/v1.23.6/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.6/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/kubectl
--- PASS: TestDownloadOnly/v1.23.6/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.6/LogsDuration (0.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20220531171529-2108
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20220531171529-2108: exit status 85 (390.4741ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/31 17:15:47
	Running on machine: minikube7
	Binary: Built with gc go1.18.2 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 17:15:47.089828    7920 out.go:296] Setting OutFile to fd 564 ...
	I0531 17:15:47.147286    7920 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:15:47.147410    7920 out.go:309] Setting ErrFile to fd 648...
	I0531 17:15:47.147410    7920 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0531 17:15:47.161434    7920 root.go:300] Error reading config file at C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube7\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0531 17:15:47.162625    7920 out.go:303] Setting JSON to true
	I0531 17:15:47.166016    7920 start.go:115] hostinfo: {"hostname":"minikube7","uptime":75617,"bootTime":1653941730,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0531 17:15:47.166173    7920 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 17:15:47.170941    7920 out.go:97] [download-only-20220531171529-2108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0531 17:15:47.171121    7920 notify.go:193] Checking for updates...
	I0531 17:15:47.173921    7920 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0531 17:15:47.177462    7920 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0531 17:15:47.182213    7920 out.go:169] MINIKUBE_LOCATION=14079
	I0531 17:15:47.185202    7920 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0531 17:15:47.189567    7920 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0531 17:15:47.190778    7920 config.go:178] Loaded profile config "download-only-20220531171529-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0531 17:15:47.191109    7920 start.go:714] api.Load failed for download-only-20220531171529-2108: filestore "download-only-20220531171529-2108": Docker machine "download-only-20220531171529-2108" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0531 17:15:47.191109    7920 driver.go:358] Setting default libvirt URI to qemu:///system
	W0531 17:15:47.191109    7920 start.go:714] api.Load failed for download-only-20220531171529-2108: filestore "download-only-20220531171529-2108": Docker machine "download-only-20220531171529-2108" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0531 17:15:49.650766    7920 docker.go:137] docker version: linux-20.10.14
	I0531 17:15:49.658256    7920 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:15:51.621169    7920 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9629035s)
	I0531 17:15:51.622294    7920 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2022-05-31 17:15:50.6075947 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:15:52.097512    7920 out.go:97] Using the docker driver based on existing profile
	I0531 17:15:52.097512    7920 start.go:284] selected driver: docker
	I0531 17:15:52.097512    7920 start.go:806] validating driver "docker" against &{Name:download-only-20220531171529-2108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220531171529-2108 Names
pace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 17:15:52.119488    7920 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:15:54.044484    7920 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.9248407s)
	I0531 17:15:54.044747    7920 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:44 OomKillDisable:true NGoroutines:45 SystemTime:2022-05-31 17:15:53.078685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_6
4 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,p
rofile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:15:54.089686    7920 cni.go:95] Creating CNI manager for ""
	I0531 17:15:54.089757    7920 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0531 17:15:54.089757    7920 start_flags.go:306] config:
	{Name:download-only-20220531171529-2108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:download-only-20220531171529-2108 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomai
n:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 17:15:54.199699    7920 out.go:97] Starting control plane node download-only-20220531171529-2108 in cluster download-only-20220531171529-2108
	I0531 17:15:54.199699    7920 cache.go:120] Beginning downloading kic base image for docker with docker
	I0531 17:15:54.367655    7920 out.go:97] Pulling base image ...
	I0531 17:15:54.368276    7920 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 17:15:54.368276    7920 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local docker daemon
	I0531 17:15:54.411162    7920 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0531 17:15:54.411162    7920 cache.go:57] Caching tarball of preloaded images
	I0531 17:15:54.411509    7920 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0531 17:15:54.414862    7920 out.go:97] Downloading Kubernetes v1.23.6 preload ...
	I0531 17:15:54.414986    7920 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 ...
	I0531 17:15:54.481565    7920 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4?checksum=md5:a6c3f222f3cce2a88e27e126d64eb717 -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0531 17:15:55.416938    7920 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 to local cache
	I0531 17:15:55.417607    7920 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418.tar -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653596720-14230@sha256_e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418.tar
	I0531 17:15:55.417704    7920 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418.tar -> C:\Users\jenkins.minikube7\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.31-1653596720-14230@sha256_e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418.tar
	I0531 17:15:55.417704    7920 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local cache directory
	I0531 17:15:55.417704    7920 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 in local cache directory, skipping pull
	I0531 17:15:55.417704    7920 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 exists in cache, skipping pull
	I0531 17:15:55.418272    7920 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220531171529-2108"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.6/LogsDuration (0.39s)

                                                
                                    
TestDownloadOnly/DeleteAll (11.11s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (11.107719s)
--- PASS: TestDownloadOnly/DeleteAll (11.11s)

TestDownloadOnly/DeleteAlwaysSucceeds (6.86s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-20220531171529-2108
aaa_download_only_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-20220531171529-2108: (6.8581241s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (6.86s)

TestDownloadOnlyKic (45.09s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-20220531171625-2108 --force --alsologtostderr --driver=docker
aaa_download_only_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-20220531171625-2108 --force --alsologtostderr --driver=docker: (36.0953617s)
helpers_test.go:175: Cleaning up "download-docker-20220531171625-2108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-20220531171625-2108
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-20220531171625-2108: (7.8835756s)
--- PASS: TestDownloadOnlyKic (45.09s)

TestBinaryMirror (16.23s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-20220531171710-2108 --alsologtostderr --binary-mirror http://127.0.0.1:50762 --driver=docker
aaa_download_only_test.go:310: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-20220531171710-2108 --alsologtostderr --binary-mirror http://127.0.0.1:50762 --driver=docker: (8.0402521s)
helpers_test.go:175: Cleaning up "binary-mirror-20220531171710-2108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-20220531171710-2108
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p binary-mirror-20220531171710-2108: (7.9214294s)
--- PASS: TestBinaryMirror (16.23s)

TestOffline (235.39s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-20220531190920-2108 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-20220531190920-2108 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (3m27.9865796s)
helpers_test.go:175: Cleaning up "offline-docker-20220531190920-2108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-20220531190920-2108
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-20220531190920-2108: (27.3997254s)
--- PASS: TestOffline (235.39s)

TestAddons/Setup (406.09s)

=== RUN   TestAddons/Setup
addons_test.go:75: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-20220531171726-2108 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:75: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-20220531171726-2108 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m46.0850484s)
--- PASS: TestAddons/Setup (406.09s)

TestAddons/parallel/MetricsServer (12.85s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:357: metrics-server stabilized in 35.951ms

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:342: "metrics-server-bd6f4dd56-r4krm" [76b81ddb-60e8-43a7-92e3-28380090aecf] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0384414s

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220531171726-2108 top pods -n kube-system

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220531171726-2108 addons disable metrics-server --alsologtostderr -v=1

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:382: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220531171726-2108 addons disable metrics-server --alsologtostderr -v=1: (7.4869248s)
--- PASS: TestAddons/parallel/MetricsServer (12.85s)

TestAddons/parallel/HelmTiller (35.37s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:406: tiller-deploy stabilized in 34.9659ms

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:342: "tiller-deploy-6d67d5465d-4lt8b" [7c177767-baac-4ea2-8d8a-e68e368c5e35] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0380699s

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) Run:  kubectl --context addons-20220531171726-2108 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) Done: kubectl --context addons-20220531171726-2108 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (23.4921079s)
addons_test.go:428: kubectl --context addons-20220531171726-2108 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: 
addons_test.go:440: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220531171726-2108 addons disable helm-tiller --alsologtostderr -v=1

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:440: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220531171726-2108 addons disable helm-tiller --alsologtostderr -v=1: (6.775995s)
--- PASS: TestAddons/parallel/HelmTiller (35.37s)

TestAddons/parallel/CSI (96.55s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:511: csi-hostpath-driver pods stabilized in 41.9603ms
addons_test.go:514: (dbg) Run:  kubectl --context addons-20220531171726-2108 create -f testdata\csi-hostpath-driver\pvc.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:514: (dbg) Done: kubectl --context addons-20220531171726-2108 create -f testdata\csi-hostpath-driver\pvc.yaml: (1.5827013s)
addons_test.go:519: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220531171726-2108 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220531171726-2108 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:524: (dbg) Run:  kubectl --context addons-20220531171726-2108 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:529: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [c04b425c-ecec-472f-9b91-9f86f8c728bf] Pending
helpers_test.go:342: "task-pv-pod" [c04b425c-ecec-472f-9b91-9f86f8c728bf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [c04b425c-ecec-472f-9b91-9f86f8c728bf] Running
addons_test.go:529: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 46.0324751s
addons_test.go:534: (dbg) Run:  kubectl --context addons-20220531171726-2108 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:539: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220531171726-2108 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220531171726-2108 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:544: (dbg) Run:  kubectl --context addons-20220531171726-2108 delete pod task-pv-pod
addons_test.go:544: (dbg) Done: kubectl --context addons-20220531171726-2108 delete pod task-pv-pod: (1.5254377s)
addons_test.go:550: (dbg) Run:  kubectl --context addons-20220531171726-2108 delete pvc hpvc
addons_test.go:556: (dbg) Run:  kubectl --context addons-20220531171726-2108 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:561: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220531171726-2108 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:566: (dbg) Run:  kubectl --context addons-20220531171726-2108 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [33b3b847-8f27-491c-9060-639af9b9d95f] Pending
helpers_test.go:342: "task-pv-pod-restore" [33b3b847-8f27-491c-9060-639af9b9d95f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [33b3b847-8f27-491c-9060-639af9b9d95f] Running
addons_test.go:571: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 19.0429085s
addons_test.go:576: (dbg) Run:  kubectl --context addons-20220531171726-2108 delete pod task-pv-pod-restore
addons_test.go:576: (dbg) Done: kubectl --context addons-20220531171726-2108 delete pod task-pv-pod-restore: (2.1245828s)
addons_test.go:580: (dbg) Run:  kubectl --context addons-20220531171726-2108 delete pvc hpvc-restore
addons_test.go:584: (dbg) Run:  kubectl --context addons-20220531171726-2108 delete volumesnapshot new-snapshot-demo
addons_test.go:588: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220531171726-2108 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:588: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220531171726-2108 addons disable csi-hostpath-driver --alsologtostderr -v=1: (13.9355586s)
addons_test.go:592: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220531171726-2108 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:592: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220531171726-2108 addons disable volumesnapshots --alsologtostderr -v=1: (5.8765536s)
--- PASS: TestAddons/parallel/CSI (96.55s)

TestAddons/serial/GCPAuth (25.93s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:603: (dbg) Run:  kubectl --context addons-20220531171726-2108 create -f testdata\busybox.yaml
addons_test.go:603: (dbg) Done: kubectl --context addons-20220531171726-2108 create -f testdata\busybox.yaml: (1.7455339s)
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [4c01fb4c-bec4-402f-9352-322ab1fc3d59] Pending
helpers_test.go:342: "busybox" [4c01fb4c-bec4-402f-9352-322ab1fc3d59] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [4c01fb4c-bec4-402f-9352-322ab1fc3d59] Running
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 9.0547265s
addons_test.go:615: (dbg) Run:  kubectl --context addons-20220531171726-2108 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:628: (dbg) Run:  kubectl --context addons-20220531171726-2108 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:652: (dbg) Run:  kubectl --context addons-20220531171726-2108 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:665: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220531171726-2108 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:665: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220531171726-2108 addons disable gcp-auth --alsologtostderr -v=1: (13.6637895s)
--- PASS: TestAddons/serial/GCPAuth (25.93s)

TestAddons/StoppedEnableDisable (24.15s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-20220531171726-2108
addons_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-20220531171726-2108: (18.6967768s)
addons_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-20220531171726-2108
addons_test.go:136: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-20220531171726-2108: (2.7381357s)
addons_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-20220531171726-2108
addons_test.go:140: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-20220531171726-2108: (2.7142198s)
--- PASS: TestAddons/StoppedEnableDisable (24.15s)

TestCertOptions (526.49s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-20220531192459-2108 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
E0531 19:25:02.525312    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-20220531192459-2108 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (8m1.6107444s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-20220531192459-2108 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-20220531192459-2108 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (7.4470537s)
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-20220531192459-2108 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-20220531192459-2108 -- "sudo cat /etc/kubernetes/admin.conf": (7.0074638s)
helpers_test.go:175: Cleaning up "cert-options-20220531192459-2108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-20220531192459-2108

=== CONT  TestCertOptions
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-20220531192459-2108: (29.0313996s)
--- PASS: TestCertOptions (526.49s)

TestCertExpiration (738.83s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-20220531192220-2108 --memory=2048 --cert-expiration=3m --driver=docker

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-20220531192220-2108 --memory=2048 --cert-expiration=3m --driver=docker: (8m1.1229692s)
E0531 19:31:10.031565    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-20220531192220-2108 --memory=2048 --cert-expiration=8760h --driver=docker

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-20220531192220-2108 --memory=2048 --cert-expiration=8760h --driver=docker: (39.7101759s)
helpers_test.go:175: Cleaning up "cert-expiration-20220531192220-2108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-20220531192220-2108
E0531 19:34:13.057463    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-20220531192220-2108: (37.9842559s)
--- PASS: TestCertExpiration (738.83s)

TestDockerFlags (163.84s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-20220531192247-2108 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
E0531 19:24:13.053835    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-20220531192247-2108 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (2m2.1004802s)
docker_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20220531192247-2108 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:50: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-20220531192247-2108 ssh "sudo systemctl show docker --property=Environment --no-pager": (6.7469137s)
docker_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20220531192247-2108 ssh "sudo systemctl show docker --property=ExecStart --no-pager"

=== CONT  TestDockerFlags
docker_test.go:61: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-20220531192247-2108 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (7.2070466s)
helpers_test.go:175: Cleaning up "docker-flags-20220531192247-2108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-20220531192247-2108
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-20220531192247-2108: (27.7872947s)
--- PASS: TestDockerFlags (163.84s)

TestForceSystemdFlag (526.36s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-20220531191724-2108 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-20220531191724-2108 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (8m12.593005s)
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-20220531191724-2108 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-20220531191724-2108 ssh "docker info --format {{.CgroupDriver}}": (7.2625102s)
helpers_test.go:175: Cleaning up "force-systemd-flag-20220531191724-2108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-20220531191724-2108
E0531 19:26:10.016785    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-20220531191724-2108: (26.5024337s)
--- PASS: TestForceSystemdFlag (526.36s)

TestForceSystemdEnv (162.95s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-20220531191937-2108 --memory=2048 --alsologtostderr -v=5 --driver=docker
E0531 19:20:02.517520    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.

=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-20220531191937-2108 --memory=2048 --alsologtostderr -v=5 --driver=docker: (2m13.090143s)
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-20220531191937-2108 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-20220531191937-2108 ssh "docker info --format {{.CgroupDriver}}": (8.1090469s)
helpers_test.go:175: Cleaning up "force-systemd-env-20220531191937-2108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-20220531191937-2108
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-20220531191937-2108: (21.753874s)
--- PASS: TestForceSystemdEnv (162.95s)

TestErrorSpam/setup (115.23s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-20220531172704-2108 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 --driver=docker
error_spam_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-20220531172704-2108 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 --driver=docker: (1m55.2334015s)
error_spam_test.go:88: acceptable stderr: "! C:\\ProgramData\\chocolatey\\bin\\kubectl.exe is version 1.18.2, which may have incompatibilites with Kubernetes 1.23.6."
--- PASS: TestErrorSpam/setup (115.23s)

TestErrorSpam/start (21.25s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 start --dry-run
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 start --dry-run: (7.2850845s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 start --dry-run
E0531 17:29:13.014842    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
E0531 17:29:13.045124    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
E0531 17:29:13.061337    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
E0531 17:29:13.092929    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
E0531 17:29:13.140493    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
E0531 17:29:13.234402    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
E0531 17:29:13.405746    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
E0531 17:29:13.739373    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
E0531 17:29:14.384114    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 start --dry-run: (7.1184028s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 start --dry-run
E0531 17:29:15.670610    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
E0531 17:29:18.245259    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 start --dry-run: (6.8413671s)
--- PASS: TestErrorSpam/start (21.25s)

                                                
                                    
TestErrorSpam/status (18.72s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 status
E0531 17:29:23.368233    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 status: (6.2604803s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 status
E0531 17:29:33.623556    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 status: (6.2430894s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 status
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 status: (6.2080809s)
--- PASS: TestErrorSpam/status (18.72s)

                                                
                                    
TestErrorSpam/pause (16.34s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 pause
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 pause: (5.8992597s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 pause
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 pause: (5.1747403s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 pause
E0531 17:29:54.105372    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 pause: (5.2657111s)
--- PASS: TestErrorSpam/pause (16.34s)

                                                
                                    
TestErrorSpam/unpause (17s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 unpause
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 unpause: (5.8966009s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 unpause
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 unpause: (5.5433282s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 unpause
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 unpause: (5.5518521s)
--- PASS: TestErrorSpam/unpause (17.00s)

                                                
                                    
TestErrorSpam/stop (31.94s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 stop
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 stop: (17.6406321s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 stop
E0531 17:30:35.073406    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 stop: (7.2065349s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 stop
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220531172704-2108 --log_dir C:\Users\jenkins.minikube7\AppData\Local\Temp\nospam-20220531172704-2108 stop: (7.0881787s)
--- PASS: TestErrorSpam/stop (31.94s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: C:\Users\jenkins.minikube7\minikube-integration\.minikube\files\etc\test\nested\copy\2108\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                    
TestFunctional/serial/StartWithProxy (128.69s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220531173104-2108 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
E0531 17:31:57.006608    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
functional_test.go:2160: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220531173104-2108 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (2m8.6862729s)
--- PASS: TestFunctional/serial/StartWithProxy (128.69s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (34.05s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220531173104-2108 --alsologtostderr -v=8
functional_test.go:651: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220531173104-2108 --alsologtostderr -v=8: (34.0466318s)
functional_test.go:655: soft start took 34.048376s for "functional-20220531173104-2108" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.05s)

                                                
                                    
TestFunctional/serial/KubeContext (0.16s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.16s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.35s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220531173104-2108 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.35s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (18.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 cache add k8s.gcr.io/pause:3.1: (6.0252996s)
functional_test.go:1041: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 cache add k8s.gcr.io/pause:3.3: (5.8561853s)
functional_test.go:1041: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 cache add k8s.gcr.io/pause:latest
functional_test.go:1041: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 cache add k8s.gcr.io/pause:latest: (6.1770786s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (18.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (9.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220531173104-2108 C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1946352824\001
functional_test.go:1069: (dbg) Done: docker build -t minikube-local-cache-test:functional-20220531173104-2108 C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1946352824\001: (2.3474555s)
functional_test.go:1081: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 cache add minikube-local-cache-test:functional-20220531173104-2108
E0531 17:34:13.019270    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
functional_test.go:1081: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 cache add minikube-local-cache-test:functional-20220531173104-2108: (5.4312483s)
functional_test.go:1086: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 cache delete minikube-local-cache-test:functional-20220531173104-2108
functional_test.go:1075: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220531173104-2108
functional_test.go:1075: (dbg) Done: docker rmi minikube-local-cache-test:functional-20220531173104-2108: (1.0389449s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (9.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.34s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.35s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (6.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh sudo crictl images
functional_test.go:1116: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh sudo crictl images: (6.0860684s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (6.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (23.75s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1139: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh sudo docker rmi k8s.gcr.io/pause:latest: (6.1445229s)
functional_test.go:1145: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (6.0127242s)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 cache reload
functional_test.go:1150: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 cache reload: (5.5553711s)
functional_test.go:1155: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
E0531 17:34:40.858899    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
functional_test.go:1155: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: (6.0331756s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (23.75s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.70s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (1.95s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 kubectl -- --context functional-20220531173104-2108 get pods
functional_test.go:708: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 kubectl -- --context functional-20220531173104-2108 get pods: (1.9471717s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.95s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.87s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out\kubectl.exe --context functional-20220531173104-2108 get pods
functional_test.go:733: (dbg) Done: out\kubectl.exe --context functional-20220531173104-2108 get pods: (1.8598064s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.87s)

                                                
                                    
TestFunctional/serial/ExtraConfig (61.76s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220531173104-2108 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:749: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220531173104-2108 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m1.7640474s)
functional_test.go:753: restart took 1m1.7640474s for "functional-20220531173104-2108" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (61.76s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.3s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220531173104-2108 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:817: etcd phase: Running
functional_test.go:827: etcd status: Ready
functional_test.go:817: kube-apiserver phase: Running
functional_test.go:827: kube-apiserver status: Ready
functional_test.go:817: kube-controller-manager phase: Running
functional_test.go:827: kube-controller-manager status: Ready
functional_test.go:817: kube-scheduler phase: Running
functional_test.go:827: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.30s)

                                                
                                    
TestFunctional/serial/LogsCmd (7.49s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 logs
functional_test.go:1228: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 logs: (7.4891907s)
--- PASS: TestFunctional/serial/LogsCmd (7.49s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (8.46s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 logs --file C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1375591857\001\logs.txt
functional_test.go:1242: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 logs --file C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1375591857\001\logs.txt: (8.4585931s)
--- PASS: TestFunctional/serial/LogsFileCmd (8.46s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (2.27s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 config get cpus: exit status 14 (376.7175ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 config set cpus 2
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 config get cpus
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 config unset cpus
functional_test.go:1191: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 config get cpus: exit status 14 (345.269ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (2.27s)

                                                
                                    
TestFunctional/parallel/DryRun (12.96s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220531173104-2108 --dry-run --memory 250MB --alsologtostderr --driver=docker

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220531173104-2108 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (5.5716118s)

                                                
                                                
-- stdout --
	* [functional-20220531173104-2108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 17:37:15.764655    8032 out.go:296] Setting OutFile to fd 864 ...
	I0531 17:37:15.819655    8032 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:37:15.819655    8032 out.go:309] Setting ErrFile to fd 792...
	I0531 17:37:15.819655    8032 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:37:15.842653    8032 out.go:303] Setting JSON to false
	I0531 17:37:15.846661    8032 start.go:115] hostinfo: {"hostname":"minikube7","uptime":76906,"bootTime":1653941729,"procs":157,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0531 17:37:15.846661    8032 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 17:37:15.851656    8032 out.go:177] * [functional-20220531173104-2108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0531 17:37:15.855651    8032 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0531 17:37:15.860674    8032 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0531 17:37:15.863665    8032 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 17:37:15.865657    8032 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 17:37:15.869651    8032 config.go:178] Loaded profile config "functional-20220531173104-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 17:37:15.870656    8032 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 17:37:18.571169    8032 docker.go:137] docker version: linux-20.10.14
	I0531 17:37:18.579619    8032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:37:20.731738    8032 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.152108s)
	I0531 17:37:20.732736    8032 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-05-31 17:37:19.592626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:37:20.736484    8032 out.go:177] * Using the docker driver based on existing profile
	I0531 17:37:20.742590    8032 start.go:284] selected driver: docker
	I0531 17:37:20.742590    8032 start.go:806] validating driver "docker" against &{Name:functional-20220531173104-2108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220531173104-2108 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regis
try:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 17:37:20.742590    8032 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 17:37:21.056800    8032 out.go:177] 
	W0531 17:37:21.062804    8032 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0531 17:37:21.066797    8032 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220531173104-2108 --dry-run --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:983: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220531173104-2108 --dry-run --alsologtostderr -v=1 --driver=docker: (7.3914037s)
--- PASS: TestFunctional/parallel/DryRun (12.96s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (5.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220531173104-2108 --dry-run --memory 250MB --alsologtostderr --driver=docker

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220531173104-2108 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (5.2508237s)

                                                
                                                
-- stdout --
	* [functional-20220531173104-2108] minikube v1.26.0-beta.1 sur Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 17:37:13.288111    6152 out.go:296] Setting OutFile to fd 920 ...
	I0531 17:37:13.344121    6152 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:37:13.344121    6152 out.go:309] Setting ErrFile to fd 864...
	I0531 17:37:13.344121    6152 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 17:37:13.355121    6152 out.go:303] Setting JSON to false
	I0531 17:37:13.357112    6152 start.go:115] hostinfo: {"hostname":"minikube7","uptime":76903,"bootTime":1653941730,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f66ed2ea-6c04-4a6b-8eea-b2fb0953e990"}
	W0531 17:37:13.357112    6152 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0531 17:37:13.362109    6152 out.go:177] * [functional-20220531173104-2108] minikube v1.26.0-beta.1 sur Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	I0531 17:37:13.366124    6152 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	I0531 17:37:13.368108    6152 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	I0531 17:37:13.371118    6152 out.go:177]   - MINIKUBE_LOCATION=14079
	I0531 17:37:13.375126    6152 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 17:37:13.377715    6152 config.go:178] Loaded profile config "functional-20220531173104-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 17:37:13.378729    6152 driver.go:358] Setting default libvirt URI to qemu:///system
	I0531 17:37:16.023689    6152 docker.go:137] docker version: linux-20.10.14
	I0531 17:37:16.033760    6152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 17:37:18.113758    6152 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0797535s)
	I0531 17:37:18.114536    6152 info.go:265] docker info: {ID:JKWR:L4LW:XYJC:G6AI:GZFU:RUGW:CCH6:OD2M:V572:4FTB:B7YC:DTUC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-05-31 17:37:17.0628106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.1] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0531 17:37:18.118098    6152 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0531 17:37:18.123088    6152 start.go:284] selected driver: docker
	I0531 17:37:18.123088    6152 start.go:806] validating driver "docker" against &{Name:functional-20220531173104-2108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653596720-14230@sha256:e953786303ac8350802546ee187d34e89f0007072a54fdbcc2f86a1fb8575418 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220531173104-2108 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regis
try:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube7:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0531 17:37:18.123088    6152 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 17:37:18.236033    6152 out.go:177] 
	W0531 17:37:18.239282    6152 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0531 17:37:18.243308    6152 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (5.25s)

                                                
                                    
TestFunctional/parallel/StatusCmd (23.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 status

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 status: (7.5931273s)
functional_test.go:852: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:852: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (8.1210146s)
functional_test.go:864: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 status -o json

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:864: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 status -o json: (7.4747468s)
--- PASS: TestFunctional/parallel/StatusCmd (23.19s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (3.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 addons list

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 addons list: (3.2882664s)
functional_test.go:1631: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (3.67s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (53.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [b78f9e66-f4c8-40cb-90d3-24aae4fc2705] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0271037s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220531173104-2108 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220531173104-2108 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220531173104-2108 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220531173104-2108 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [21fd3f05-62fb-433b-8773-6875c87575ce] Pending
helpers_test.go:342: "sp-pod" [21fd3f05-62fb-433b-8773-6875c87575ce] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [21fd3f05-62fb-433b-8773-6875c87575ce] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 30.1295064s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220531173104-2108 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:100: (dbg) Done: kubectl --context functional-20220531173104-2108 exec sp-pod -- touch /tmp/mount/foo: (1.1223322s)
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220531173104-2108 delete -f testdata/storage-provisioner/pod.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20220531173104-2108 delete -f testdata/storage-provisioner/pod.yaml: (4.0902185s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220531173104-2108 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [b31211e5-01d1-4a06-9d69-72182ab83802] Pending

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [b31211e5-01d1-4a06-9d69-72182ab83802] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [b31211e5-01d1-4a06-9d69-72182ab83802] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.0881322s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220531173104-2108 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (53.10s)

                                                
                                    
TestFunctional/parallel/SSHCmd (14.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh "echo hello"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh "echo hello": (7.6488324s)
functional_test.go:1671: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh "cat /etc/hostname"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh "cat /etc/hostname": (7.071321s)
--- PASS: TestFunctional/parallel/SSHCmd (14.72s)

                                                
                                    
TestFunctional/parallel/CpCmd (25.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 cp testdata\cp-test.txt /home/docker/cp-test.txt: (5.7099758s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh -n functional-20220531173104-2108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh -n functional-20220531173104-2108 "sudo cat /home/docker/cp-test.txt": (6.7822811s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 cp functional-20220531173104-2108:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalparallelCpCmd2199164473\001\cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 cp functional-20220531173104-2108:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestFunctionalparallelCpCmd2199164473\001\cp-test.txt: (6.8750278s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh -n functional-20220531173104-2108 "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh -n functional-20220531173104-2108 "sudo cat /home/docker/cp-test.txt": (6.4397462s)
--- PASS: TestFunctional/parallel/CpCmd (25.81s)

                                                
                                    
TestFunctional/parallel/MySQL (70.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220531173104-2108 replace --force -f testdata\mysql.yaml
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:342: "mysql-b87c45988-hqk85" [f511a394-30ea-40ce-9841-29501af6ecf7] Pending
helpers_test.go:342: "mysql-b87c45988-hqk85" [f511a394-30ea-40ce-9841-29501af6ecf7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-hqk85" [f511a394-30ea-40ce-9841-29501af6ecf7] Running

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 53.1508272s
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220531173104-2108 exec mysql-b87c45988-hqk85 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220531173104-2108 exec mysql-b87c45988-hqk85 -- mysql -ppassword -e "show databases;": exit status 1 (435.9613ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220531173104-2108 exec mysql-b87c45988-hqk85 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220531173104-2108 exec mysql-b87c45988-hqk85 -- mysql -ppassword -e "show databases;": exit status 1 (486.9022ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220531173104-2108 exec mysql-b87c45988-hqk85 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220531173104-2108 exec mysql-b87c45988-hqk85 -- mysql -ppassword -e "show databases;": exit status 1 (561.4859ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220531173104-2108 exec mysql-b87c45988-hqk85 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220531173104-2108 exec mysql-b87c45988-hqk85 -- mysql -ppassword -e "show databases;": exit status 1 (577.8928ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220531173104-2108 exec mysql-b87c45988-hqk85 -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220531173104-2108 exec mysql-b87c45988-hqk85 -- mysql -ppassword -e "show databases;": exit status 1 (639.3382ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220531173104-2108 exec mysql-b87c45988-hqk85 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (70.49s)
TestFunctional/parallel/FileSync (6.46s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/2108/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh "sudo cat /etc/test/nested/copy/2108/hosts"
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1857: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh "sudo cat /etc/test/nested/copy/2108/hosts": (6.456836s)
functional_test.go:1862: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (6.46s)
TestFunctional/parallel/CertSync (36.47s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/2108.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh "sudo cat /etc/ssl/certs/2108.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh "sudo cat /etc/ssl/certs/2108.pem": (6.2145168s)
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/2108.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh "sudo cat /usr/share/ca-certificates/2108.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh "sudo cat /usr/share/ca-certificates/2108.pem": (5.9827418s)
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh "sudo cat /etc/ssl/certs/51391683.0"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh "sudo cat /etc/ssl/certs/51391683.0": (6.1285483s)
functional_test.go:1925: Checking for existence of /etc/ssl/certs/21082.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh "sudo cat /etc/ssl/certs/21082.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh "sudo cat /etc/ssl/certs/21082.pem": (6.0048123s)
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/21082.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh "sudo cat /usr/share/ca-certificates/21082.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh "sudo cat /usr/share/ca-certificates/21082.pem": (6.1408826s)
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (5.9962517s)
--- PASS: TestFunctional/parallel/CertSync (36.47s)
TestFunctional/parallel/NodeLabels (0.22s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220531173104-2108 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.22s)
TestFunctional/parallel/NonActiveRuntimeDisabled (6.55s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh "sudo systemctl is-active crio"
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh "sudo systemctl is-active crio": exit status 1 (6.5460133s)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (6.55s)
TestFunctional/parallel/ProfileCmd/profile_not_create (11.15s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Done: out/minikube-windows-amd64.exe profile lis: (3.2592069s)
functional_test.go:1270: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (7.8926011s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (11.15s)
TestFunctional/parallel/DockerEnv/powershell (29.25s)
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:491: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220531173104-2108 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20220531173104-2108"
=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:491: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220531173104-2108 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20220531173104-2108": (17.4485382s)
functional_test.go:514: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220531173104-2108 docker-env | Invoke-Expression ; docker images"
=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:514: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220531173104-2108 docker-env | Invoke-Expression ; docker images": (11.7927196s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (29.25s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-20220531173104-2108 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.64s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220531173104-2108 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [90551f1c-7646-4527-b9de-4209247bb5eb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [90551f1c-7646-4527-b9de-4209247bb5eb] Running
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.1273122s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.64s)
TestFunctional/parallel/ProfileCmd/profile_list (7.05s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-windows-amd64.exe profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Done: out/minikube-windows-amd64.exe profile list: (6.6514051s)
functional_test.go:1310: Took "6.6514051s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1324: Took "398.138ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (7.05s)
TestFunctional/parallel/ProfileCmd/profile_json_output (7.95s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (7.4900082s)
functional_test.go:1361: Took "7.4900082s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1374: Took "461.0145ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (7.95s)
TestFunctional/parallel/UpdateContextCmd/no_changes (4.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 update-context --alsologtostderr -v=2
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 update-context --alsologtostderr -v=2: (4.1393059s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (4.14s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (4.03s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 update-context --alsologtostderr -v=2
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 update-context --alsologtostderr -v=2: (4.0203807s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (4.03s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (4.06s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 update-context --alsologtostderr -v=2
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 update-context --alsologtostderr -v=2: (4.0607122s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (4.06s)
TestFunctional/parallel/ImageCommands/ImageListShort (4.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image ls --format short
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image ls --format short: (4.1691478s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.6
k8s.gcr.io/kube-proxy:v1.23.6
k8s.gcr.io/kube-controller-manager:v1.23.6
k8s.gcr.io/kube-apiserver:v1.23.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-20220531173104-2108
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220531173104-2108
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (4.17s)
TestFunctional/parallel/ImageCommands/ImageListTable (4.16s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image ls --format table
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image ls --format table: (4.1589881s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image ls --format table:
|---------------------------------------------|--------------------------------|---------------|--------|
|                    Image                    |              Tag               |   Image ID    |  Size  |
|---------------------------------------------|--------------------------------|---------------|--------|
| k8s.gcr.io/etcd                             | 3.5.1-0                        | 25f8c7f3da61c | 293MB  |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                         | a4ca41631cc7a | 46.8MB |
| k8s.gcr.io/pause                            | 3.6                            | 6270bb605e12e | 683kB  |
| gcr.io/google-containers/addon-resizer      | functional-20220531173104-2108 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/mysql                     | 5.7                            | 2a0961b7de03c | 462MB  |
| docker.io/library/nginx                     | alpine                         | b1c3acb288825 | 23.4MB |
| k8s.gcr.io/kube-scheduler                   | v1.23.6                        | 595f327f224a4 | 53.5MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                             | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/pause                            | 3.1                            | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/kube-apiserver                   | v1.23.6                        | 8fa62c12256df | 135MB  |
| k8s.gcr.io/kube-proxy                       | v1.23.6                        | 4c03754524064 | 112MB  |
| k8s.gcr.io/kube-controller-manager          | v1.23.6                        | df7b72818ad2e | 125MB  |
| k8s.gcr.io/pause                            | latest                         | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-20220531173104-2108 | 766dd9629241f | 30B    |
| docker.io/library/nginx                     | latest                         | 0e901e68141fd | 142MB  |
| k8s.gcr.io/pause                            | 3.3                            | 0184c1613d929 | 683kB  |
| k8s.gcr.io/echoserver                       | 1.8                            | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|--------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (4.16s)
TestFunctional/parallel/ImageCommands/ImageListJson (4.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image ls --format json
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image ls --format json: (4.23894s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image ls --format json:
[{"id":"25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"293000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"595f327f224a42213913a39d224c8aceb96c81ad3909ae13f6045f570aafe8f0","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.6"],"size":"53500000"},{"id":"2a0961b7de03c7b11afd13fec09d0d30992b6e0b4f947a4aba4273723778ed95","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"462000000"},{"id":"8fa62c12256df9d9d0c3f1cf90856e27d90f209f42271c2f19326a705342c3b6","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.6"],"size":"135000000"},{"id":"4c037545240644e87d79f6b4071331f9adea6176339c98e529b4af8af00d4e47","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.6"],"size":"112000000"},{"id":"0e901e68141fd02f237cf63eb842529f8a9500636a9419e3cf4fb986b8fe3d5d","repoDigests":[],"
repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"b1c3acb28882519cf6d3a4d7fe2b21d0ae20bde9cfd2c08a7de057f8cfccff15","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23400000"},{"id":"df7b72818ad2e4f1f204c7ffb51239de67f49c6b22671c70354ee5d65ac37657","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.6"],"size":"125000000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220531173104-
2108"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"766dd9629241f84b8ad692235cbf45ea1850c0bef4405533309f264da07dc82f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220531173104-2108"],"size":"30"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (4.24s)
TestFunctional/parallel/ImageCommands/ImageListYaml (4.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image ls --format yaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image ls --format yaml: (4.3140312s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image ls --format yaml:
- id: 766dd9629241f84b8ad692235cbf45ea1850c0bef4405533309f264da07dc82f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220531173104-2108
size: "30"
- id: 2a0961b7de03c7b11afd13fec09d0d30992b6e0b4f947a4aba4273723778ed95
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "462000000"
- id: 4c037545240644e87d79f6b4071331f9adea6176339c98e529b4af8af00d4e47
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.6
size: "112000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 8fa62c12256df9d9d0c3f1cf90856e27d90f209f42271c2f19326a705342c3b6
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.6
size: "135000000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 595f327f224a42213913a39d224c8aceb96c81ad3909ae13f6045f570aafe8f0
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.6
size: "53500000"
- id: 25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "293000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 0e901e68141fd02f237cf63eb842529f8a9500636a9419e3cf4fb986b8fe3d5d
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: b1c3acb28882519cf6d3a4d7fe2b21d0ae20bde9cfd2c08a7de057f8cfccff15
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23400000"
- id: df7b72818ad2e4f1f204c7ffb51239de67f49c6b22671c70354ee5d65ac37657
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.6
size: "125000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220531173104-2108
size: "32900000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (4.31s)

TestFunctional/parallel/ImageCommands/ImageBuild (18.42s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh pgrep buildkitd
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 ssh pgrep buildkitd: exit status 1 (6.5236865s)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image build -t localhost/my-image:functional-20220531173104-2108 testdata\build
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:310: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image build -t localhost/my-image:functional-20220531173104-2108 testdata\build: (7.8503842s)
functional_test.go:315: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image build -t localhost/my-image:functional-20220531173104-2108 testdata\build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 831ef2e56aa6
Removing intermediate container 831ef2e56aa6
---> 80f82eaa9fc4
Step 3/3 : ADD content.txt /
---> 86977654d690
Successfully built 86977654d690
Successfully tagged localhost/my-image:functional-20220531173104-2108
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image ls
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image ls: (4.0463373s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (18.42s)

TestFunctional/parallel/ImageCommands/Setup (5.4s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.3133393s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220531173104-2108
functional_test.go:342: (dbg) Done: docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220531173104-2108: (1.0716525s)
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.40s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (16.69s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220531173104-2108
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220531173104-2108: (12.5666347s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image ls
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image ls: (4.1199096s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (16.69s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (12.75s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220531173104-2108
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220531173104-2108: (8.683047s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image ls
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image ls: (4.0658821s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (12.75s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (21.62s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (4.3866603s)
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220531173104-2108
functional_test.go:235: (dbg) Done: docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220531173104-2108: (1.0841146s)
functional_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220531173104-2108
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220531173104-2108: (12.0384036s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image ls
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image ls: (4.0937893s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (21.62s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.87s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image save gcr.io/google-containers/addon-resizer:functional-20220531173104-2108 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image save gcr.io/google-containers/addon-resizer:functional-20220531173104-2108 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (7.873005s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.87s)

TestFunctional/parallel/ImageCommands/ImageRemove (8.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image rm gcr.io/google-containers/addon-resizer:functional-20220531173104-2108
=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image rm gcr.io/google-containers/addon-resizer:functional-20220531173104-2108: (4.205395s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image ls
=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image ls: (3.9680574s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (8.17s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (12.62s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (8.2501016s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image ls
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image ls: (4.3673865s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (12.62s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-20220531173104-2108 tunnel --alsologtostderr] ...
helpers_test.go:506: unable to kill pid 6240: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (19.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220531173104-2108
functional_test.go:414: (dbg) Done: docker rmi gcr.io/google-containers/addon-resizer:functional-20220531173104-2108: (1.1323687s)
functional_test.go:419: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220531173104-2108
functional_test.go:419: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220531173104-2108: (16.9628443s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220531173104-2108
functional_test.go:424: (dbg) Done: docker image inspect gcr.io/google-containers/addon-resizer:functional-20220531173104-2108: (1.0791421s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (19.20s)

TestFunctional/parallel/Version/short (0.36s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 version --short
--- PASS: TestFunctional/parallel/Version/short (0.36s)

TestFunctional/parallel/Version/components (5.81s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220531173104-2108 version -o=json --components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220531173104-2108 version -o=json --components: (5.8094472s)
--- PASS: TestFunctional/parallel/Version/components (5.81s)

TestFunctional/delete_addon-resizer_images (0.01s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8: context deadline exceeded (0s)
functional_test.go:187: failed to remove image "gcr.io/google-containers/addon-resizer:1.8.8" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8": context deadline exceeded
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220531173104-2108
functional_test.go:185: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220531173104-2108: context deadline exceeded (0s)
functional_test.go:187: failed to remove image "gcr.io/google-containers/addon-resizer:functional-20220531173104-2108" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220531173104-2108": context deadline exceeded
--- PASS: TestFunctional/delete_addon-resizer_images (0.01s)

TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220531173104-2108
functional_test.go:193: (dbg) Non-zero exit: docker rmi -f localhost/my-image:functional-20220531173104-2108: context deadline exceeded (0s)
functional_test.go:195: failed to remove image my-image from docker images. args "docker rmi -f localhost/my-image:functional-20220531173104-2108": context deadline exceeded
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220531173104-2108
functional_test.go:201: (dbg) Non-zero exit: docker rmi -f minikube-local-cache-test:functional-20220531173104-2108: context deadline exceeded (0s)
functional_test.go:203: failed to remove image minikube local cache test images from docker. args "docker rmi -f minikube-local-cache-test:functional-20220531173104-2108": context deadline exceeded
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestIngressAddonLegacy/StartLegacyK8sCluster (134.82s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220531181152-2108 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220531181152-2108 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker: (2m14.8209942s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (134.82s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (51s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220531181152-2108 addons enable ingress --alsologtostderr -v=5
E0531 18:14:13.037301    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220531181152-2108 addons enable ingress --alsologtostderr -v=5: (50.9973047s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (51.00s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (4.57s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220531181152-2108 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220531181152-2108 addons enable ingress-dns --alsologtostderr -v=5: (4.5738961s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (4.57s)

TestJSONOutput/start/Command (127.46s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-20220531181613-2108 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
E0531 18:16:15.206853    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
E0531 18:16:20.339794    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
E0531 18:16:30.582748    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
E0531 18:16:51.076038    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
E0531 18:17:32.042791    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-20220531181613-2108 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: (2m7.4563837s)
--- PASS: TestJSONOutput/start/Command (127.46s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (6.01s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-20220531181613-2108 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-20220531181613-2108 --output=json --user=testUser: (6.0102345s)
--- PASS: TestJSONOutput/pause/Command (6.01s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (5.62s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-20220531181613-2108 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-20220531181613-2108 --output=json --user=testUser: (5.6239413s)
--- PASS: TestJSONOutput/unpause/Command (5.62s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (17.73s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-20220531181613-2108 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-20220531181613-2108 --output=json --user=testUser: (17.7339385s)
--- PASS: TestJSONOutput/stop/Command (17.73s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (7.11s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-20220531181909-2108 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-20220531181909-2108 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (378.7982ms)

-- stdout --
	{"specversion":"1.0","id":"3108519c-8430-40f9-987b-4e6e6f5cc944","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220531181909-2108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"643f0639-f85a-4559-be65-afa1da0d53b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube7\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"0bab2421-678f-434b-97b3-b7c1d98cafdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"fa2e4891-1808-4ae4-bb7c-2c9326475c3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14079"}}
	{"specversion":"1.0","id":"9d2c4884-8e52-4701-a013-5c73f89f847a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f4f7a1a6-65f6-495d-80de-ea8de5622a7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220531181909-2108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-20220531181909-2108
E0531 18:19:13.035476    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-20220531181909-2108: (6.7319742s)
--- PASS: TestErrorJSONOutput (7.11s)

TestKicCustomNetwork/create_custom_network (135.08s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20220531181916-2108 --network=
E0531 18:20:02.494782    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
E0531 18:20:02.509767    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
E0531 18:20:02.524818    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
E0531 18:20:02.558092    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
E0531 18:20:02.605842    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
E0531 18:20:02.698572    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
E0531 18:20:02.868568    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
E0531 18:20:03.200052    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
E0531 18:20:03.845042    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
E0531 18:20:05.139935    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
E0531 18:20:07.702332    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
E0531 18:20:13.207075    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
E0531 18:20:23.457122    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
E0531 18:20:43.944928    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20220531181916-2108 --network=: (1m52.9774631s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
E0531 18:21:09.999095    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:122: (dbg) Done: docker network ls --format {{.Name}}: (1.0222145s)
helpers_test.go:175: Cleaning up "docker-network-20220531181916-2108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20220531181916-2108
E0531 18:21:24.919690    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20220531181916-2108: (21.0702592s)
--- PASS: TestKicCustomNetwork/create_custom_network (135.08s)

TestKicCustomNetwork/use_default_bridge_network (128.23s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20220531182131-2108 --network=bridge
E0531 18:21:37.814697    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
E0531 18:22:46.845978    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20220531182131-2108 --network=bridge: (1m50.968955s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:122: (dbg) Done: docker network ls --format {{.Name}}: (1.0641417s)
helpers_test.go:175: Cleaning up "docker-network-20220531182131-2108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20220531182131-2108
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20220531182131-2108: (16.1823785s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (128.23s)

TestKicExistingNetwork (136.52s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:122: (dbg) Done: docker network ls --format {{.Name}}: (1.0040302s)
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-20220531182343-2108 --network=existing-network
E0531 18:24:13.034856    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
E0531 18:25:02.506336    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
E0531 18:25:30.699145    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-20220531182343-2108 --network=existing-network: (1m49.0534148s)
helpers_test.go:175: Cleaning up "existing-network-20220531182343-2108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-20220531182343-2108
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-20220531182343-2108: (21.1224655s)
--- PASS: TestKicExistingNetwork (136.52s)

TestKicCustomSubnet (135.05s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-20220531182556-2108 --subnet=192.168.60.0/24
E0531 18:26:10.001247    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-20220531182556-2108 --subnet=192.168.60.0/24: (1m52.8008211s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220531182556-2108 --format "{{(index .IPAM.Config 0).Subnet}}"
kic_custom_network_test.go:133: (dbg) Done: docker network inspect custom-subnet-20220531182556-2108 --format "{{(index .IPAM.Config 0).Subnet}}": (1.0173212s)
helpers_test.go:175: Cleaning up "custom-subnet-20220531182556-2108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-20220531182556-2108
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-20220531182556-2108: (21.2205003s)
--- PASS: TestKicCustomSubnet (135.05s)

TestMainNoArgs (0.33s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.33s)

TestMinikubeProfile (292.11s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:42: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-20220531182811-2108
E0531 18:29:13.030739    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
E0531 18:30:02.498748    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
minikube_profile_test.go:42: (dbg) Done: out/minikube-windows-amd64.exe start -p first-20220531182811-2108: (1m53.6027325s)
minikube_profile_test.go:42: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-20220531182811-2108
E0531 18:31:10.010243    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
minikube_profile_test.go:42: (dbg) Done: out/minikube-windows-amd64.exe start -p second-20220531182811-2108: (1m51.2235287s)
minikube_profile_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe profile first-20220531182811-2108
minikube_profile_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe profile first-20220531182811-2108: (2.9701997s)
minikube_profile_test.go:53: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:53: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (10.3563293s)
minikube_profile_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe profile second-20220531182811-2108
minikube_profile_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe profile second-20220531182811-2108: (2.9188269s)
minikube_profile_test.go:53: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:53: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (9.7653235s)
helpers_test.go:175: Cleaning up "second-20220531182811-2108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-20220531182811-2108
E0531 18:32:33.187945    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-20220531182811-2108: (20.6105834s)
helpers_test.go:175: Cleaning up "first-20220531182811-2108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-20220531182811-2108
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-20220531182811-2108: (20.6572689s)
--- PASS: TestMinikubeProfile (292.11s)

TestMountStart/serial/StartWithMountFirst (49.39s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-20220531183303-2108 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-20220531183303-2108 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (48.3770354s)
--- PASS: TestMountStart/serial/StartWithMountFirst (49.39s)

TestMountStart/serial/VerifyMountFirst (5.98s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-20220531183303-2108 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-20220531183303-2108 ssh -- ls /minikube-host: (5.9839422s)
--- PASS: TestMountStart/serial/VerifyMountFirst (5.98s)

TestMountStart/serial/StartWithMountSecond (49.8s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-20220531183303-2108 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
E0531 18:34:13.030059    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-20220531183303-2108 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (48.7870122s)
--- PASS: TestMountStart/serial/StartWithMountSecond (49.80s)

TestMountStart/serial/VerifyMountSecond (5.84s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-20220531183303-2108 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-20220531183303-2108 ssh -- ls /minikube-host: (5.843593s)
--- PASS: TestMountStart/serial/VerifyMountSecond (5.84s)

TestMountStart/serial/DeleteFirst (18.41s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-20220531183303-2108 --alsologtostderr -v=5
E0531 18:35:02.502134    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-20220531183303-2108 --alsologtostderr -v=5: (18.4080022s)
--- PASS: TestMountStart/serial/DeleteFirst (18.41s)

TestMountStart/serial/VerifyMountPostDelete (5.84s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-20220531183303-2108 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-20220531183303-2108 ssh -- ls /minikube-host: (5.8400833s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (5.84s)

TestMountStart/serial/Stop (8.48s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-20220531183303-2108
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-20220531183303-2108: (8.4841187s)
--- PASS: TestMountStart/serial/Stop (8.48s)

TestMountStart/serial/RestartStopped (28.46s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-20220531183303-2108
E0531 18:35:36.271972    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-20220531183303-2108: (27.4601552s)
--- PASS: TestMountStart/serial/RestartStopped (28.46s)

TestMountStart/serial/VerifyMountPostStop (5.92s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-20220531183303-2108 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-20220531183303-2108 ssh -- ls /minikube-host: (5.924235s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (5.92s)

TestMultiNode/serial/FreshStart2Nodes (251.34s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220531183625-2108 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
E0531 18:36:26.076617    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
E0531 18:39:13.028864    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
E0531 18:40:02.503747    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
multinode_test.go:83: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220531183625-2108 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (4m1.4710351s)
multinode_test.go:89: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 status --alsologtostderr
multinode_test.go:89: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 status --alsologtostderr: (9.8706238s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (251.34s)

TestMultiNode/serial/DeployApp2Nodes (24.95s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:479: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (2.5046817s)
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- rollout status deployment/busybox: (3.569968s)
multinode_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- get pods -o jsonpath='{.items[*].status.podIP}': (1.9487657s)
multinode_test.go:502: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:502: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- get pods -o jsonpath='{.items[*].metadata.name}': (1.9735934s)
multinode_test.go:510: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- exec busybox-7978565885-7gmrj -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- exec busybox-7978565885-7gmrj -- nslookup kubernetes.io: (3.379423s)
multinode_test.go:510: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- exec busybox-7978565885-xrh5m -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- exec busybox-7978565885-xrh5m -- nslookup kubernetes.io: (3.1888365s)
multinode_test.go:520: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- exec busybox-7978565885-7gmrj -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- exec busybox-7978565885-7gmrj -- nslookup kubernetes.default: (2.1148129s)
multinode_test.go:520: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- exec busybox-7978565885-xrh5m -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- exec busybox-7978565885-xrh5m -- nslookup kubernetes.default: (2.14066s)
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- exec busybox-7978565885-7gmrj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- exec busybox-7978565885-7gmrj -- nslookup kubernetes.default.svc.cluster.local: (2.0788055s)
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- exec busybox-7978565885-xrh5m -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- exec busybox-7978565885-xrh5m -- nslookup kubernetes.default.svc.cluster.local: (2.0462081s)
--- PASS: TestMultiNode/serial/DeployApp2Nodes (24.95s)

TestMultiNode/serial/PingHostFrom2Pods (10.58s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:538: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- get pods -o jsonpath='{.items[*].metadata.name}': (1.9252662s)
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- exec busybox-7978565885-7gmrj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:546: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- exec busybox-7978565885-7gmrj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (2.1611408s)
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- exec busybox-7978565885-7gmrj -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- exec busybox-7978565885-7gmrj -- sh -c "ping -c 1 192.168.65.2": (2.2038714s)
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- exec busybox-7978565885-xrh5m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:546: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- exec busybox-7978565885-xrh5m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (2.1578523s)
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- exec busybox-7978565885-xrh5m -- sh -c "ping -c 1 192.168.65.2"
E0531 18:41:10.005197    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
multinode_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220531183625-2108 -- exec busybox-7978565885-xrh5m -- sh -c "ping -c 1 192.168.65.2": (2.1286786s)
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (10.58s)

TestMultiNode/serial/AddNode (118.24s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20220531183625-2108 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-20220531183625-2108 -v 3 --alsologtostderr: (1m44.9371076s)
multinode_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 status --alsologtostderr: (13.3074173s)
--- PASS: TestMultiNode/serial/AddNode (118.24s)

TestMultiNode/serial/ProfileList (6.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (6.3490864s)
--- PASS: TestMultiNode/serial/ProfileList (6.35s)

TestMultiNode/serial/CopyFile (215.31s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 status --output json --alsologtostderr: (13.1587155s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 cp testdata\cp-test.txt multinode-20220531183625-2108:/home/docker/cp-test.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 cp testdata\cp-test.txt multinode-20220531183625-2108:/home/docker/cp-test.txt: (6.3968274s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108 "sudo cat /home/docker/cp-test.txt": (6.3227159s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 cp multinode-20220531183625-2108:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile3989871777\001\cp-test_multinode-20220531183625-2108.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 cp multinode-20220531183625-2108:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile3989871777\001\cp-test_multinode-20220531183625-2108.txt: (6.261662s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108 "sudo cat /home/docker/cp-test.txt": (6.3179685s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 cp multinode-20220531183625-2108:/home/docker/cp-test.txt multinode-20220531183625-2108-m02:/home/docker/cp-test_multinode-20220531183625-2108_multinode-20220531183625-2108-m02.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 cp multinode-20220531183625-2108:/home/docker/cp-test.txt multinode-20220531183625-2108-m02:/home/docker/cp-test_multinode-20220531183625-2108_multinode-20220531183625-2108-m02.txt: (8.5877033s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108 "sudo cat /home/docker/cp-test.txt": (6.2859243s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108-m02 "sudo cat /home/docker/cp-test_multinode-20220531183625-2108_multinode-20220531183625-2108-m02.txt"
E0531 18:44:13.034051    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108-m02 "sudo cat /home/docker/cp-test_multinode-20220531183625-2108_multinode-20220531183625-2108-m02.txt": (6.318347s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 cp multinode-20220531183625-2108:/home/docker/cp-test.txt multinode-20220531183625-2108-m03:/home/docker/cp-test_multinode-20220531183625-2108_multinode-20220531183625-2108-m03.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 cp multinode-20220531183625-2108:/home/docker/cp-test.txt multinode-20220531183625-2108-m03:/home/docker/cp-test_multinode-20220531183625-2108_multinode-20220531183625-2108-m03.txt: (8.7285519s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108 "sudo cat /home/docker/cp-test.txt": (6.2820754s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108-m03 "sudo cat /home/docker/cp-test_multinode-20220531183625-2108_multinode-20220531183625-2108-m03.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108-m03 "sudo cat /home/docker/cp-test_multinode-20220531183625-2108_multinode-20220531183625-2108-m03.txt": (6.1915721s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 cp testdata\cp-test.txt multinode-20220531183625-2108-m02:/home/docker/cp-test.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 cp testdata\cp-test.txt multinode-20220531183625-2108-m02:/home/docker/cp-test.txt: (6.2065835s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108-m02 "sudo cat /home/docker/cp-test.txt": (6.1992173s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 cp multinode-20220531183625-2108-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile3989871777\001\cp-test_multinode-20220531183625-2108-m02.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 cp multinode-20220531183625-2108-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile3989871777\001\cp-test_multinode-20220531183625-2108-m02.txt: (6.1700218s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108-m02 "sudo cat /home/docker/cp-test.txt": (6.1742279s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 cp multinode-20220531183625-2108-m02:/home/docker/cp-test.txt multinode-20220531183625-2108:/home/docker/cp-test_multinode-20220531183625-2108-m02_multinode-20220531183625-2108.txt
E0531 18:45:02.509791    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 cp multinode-20220531183625-2108-m02:/home/docker/cp-test.txt multinode-20220531183625-2108:/home/docker/cp-test_multinode-20220531183625-2108-m02_multinode-20220531183625-2108.txt: (8.6139441s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108-m02 "sudo cat /home/docker/cp-test.txt": (6.2345197s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108 "sudo cat /home/docker/cp-test_multinode-20220531183625-2108-m02_multinode-20220531183625-2108.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108 "sudo cat /home/docker/cp-test_multinode-20220531183625-2108-m02_multinode-20220531183625-2108.txt": (6.2086803s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 cp multinode-20220531183625-2108-m02:/home/docker/cp-test.txt multinode-20220531183625-2108-m03:/home/docker/cp-test_multinode-20220531183625-2108-m02_multinode-20220531183625-2108-m03.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 cp multinode-20220531183625-2108-m02:/home/docker/cp-test.txt multinode-20220531183625-2108-m03:/home/docker/cp-test_multinode-20220531183625-2108-m02_multinode-20220531183625-2108-m03.txt: (8.4602832s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108-m02 "sudo cat /home/docker/cp-test.txt": (6.2068777s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108-m03 "sudo cat /home/docker/cp-test_multinode-20220531183625-2108-m02_multinode-20220531183625-2108-m03.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108-m03 "sudo cat /home/docker/cp-test_multinode-20220531183625-2108-m02_multinode-20220531183625-2108-m03.txt": (6.1356278s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 cp testdata\cp-test.txt multinode-20220531183625-2108-m03:/home/docker/cp-test.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 cp testdata\cp-test.txt multinode-20220531183625-2108-m03:/home/docker/cp-test.txt: (6.3132102s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108-m03 "sudo cat /home/docker/cp-test.txt": (6.2672328s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 cp multinode-20220531183625-2108-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile3989871777\001\cp-test_multinode-20220531183625-2108-m03.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 cp multinode-20220531183625-2108-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube7\AppData\Local\Temp\TestMultiNodeserialCopyFile3989871777\001\cp-test_multinode-20220531183625-2108-m03.txt: (6.2514535s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108-m03 "sudo cat /home/docker/cp-test.txt": (6.3695019s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 cp multinode-20220531183625-2108-m03:/home/docker/cp-test.txt multinode-20220531183625-2108:/home/docker/cp-test_multinode-20220531183625-2108-m03_multinode-20220531183625-2108.txt
E0531 18:46:10.005087    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 cp multinode-20220531183625-2108-m03:/home/docker/cp-test.txt multinode-20220531183625-2108:/home/docker/cp-test_multinode-20220531183625-2108-m03_multinode-20220531183625-2108.txt: (8.6671656s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108-m03 "sudo cat /home/docker/cp-test.txt": (6.3445157s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108 "sudo cat /home/docker/cp-test_multinode-20220531183625-2108-m03_multinode-20220531183625-2108.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108 "sudo cat /home/docker/cp-test_multinode-20220531183625-2108-m03_multinode-20220531183625-2108.txt": (6.4132414s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 cp multinode-20220531183625-2108-m03:/home/docker/cp-test.txt multinode-20220531183625-2108-m02:/home/docker/cp-test_multinode-20220531183625-2108-m03_multinode-20220531183625-2108-m02.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 cp multinode-20220531183625-2108-m03:/home/docker/cp-test.txt multinode-20220531183625-2108-m02:/home/docker/cp-test_multinode-20220531183625-2108-m03_multinode-20220531183625-2108-m02.txt: (8.5896152s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108-m03 "sudo cat /home/docker/cp-test.txt": (6.3144643s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108-m02 "sudo cat /home/docker/cp-test_multinode-20220531183625-2108-m03_multinode-20220531183625-2108-m02.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 ssh -n multinode-20220531183625-2108-m02 "sudo cat /home/docker/cp-test_multinode-20220531183625-2108-m03_multinode-20220531183625-2108-m02.txt": (6.3050714s)
--- PASS: TestMultiNode/serial/CopyFile (215.31s)

TestMultiNode/serial/StopNode (29.07s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 node stop m03: (7.3627382s)
multinode_test.go:214: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 status: exit status 7 (10.8186976s)

-- stdout --
	multinode-20220531183625-2108
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220531183625-2108-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220531183625-2108-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 status --alsologtostderr: exit status 7 (10.8910825s)

-- stdout --
	multinode-20220531183625-2108
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220531183625-2108-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220531183625-2108-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0531 18:47:10.412652    5388 out.go:296] Setting OutFile to fd 908 ...
	I0531 18:47:10.468068    5388 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:47:10.468068    5388 out.go:309] Setting ErrFile to fd 848...
	I0531 18:47:10.468068    5388 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:47:10.477711    5388 out.go:303] Setting JSON to false
	I0531 18:47:10.477711    5388 mustload.go:65] Loading cluster: multinode-20220531183625-2108
	I0531 18:47:10.479001    5388 config.go:178] Loaded profile config "multinode-20220531183625-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 18:47:10.479001    5388 status.go:253] checking status of multinode-20220531183625-2108 ...
	I0531 18:47:10.494657    5388 cli_runner.go:164] Run: docker container inspect multinode-20220531183625-2108 --format={{.State.Status}}
	I0531 18:47:12.960756    5388 cli_runner.go:217] Completed: docker container inspect multinode-20220531183625-2108 --format={{.State.Status}}: (2.4660318s)
	I0531 18:47:12.960756    5388 status.go:328] multinode-20220531183625-2108 host status = "Running" (err=<nil>)
	I0531 18:47:12.960756    5388 host.go:66] Checking if "multinode-20220531183625-2108" exists ...
	I0531 18:47:12.968465    5388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220531183625-2108
	I0531 18:47:14.031842    5388 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220531183625-2108: (1.0633728s)
	I0531 18:47:14.031842    5388 host.go:66] Checking if "multinode-20220531183625-2108" exists ...
	I0531 18:47:14.042630    5388 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:47:14.048526    5388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220531183625-2108
	I0531 18:47:15.119151    5388 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220531183625-2108: (1.0706203s)
	I0531 18:47:15.119151    5388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52451 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-20220531183625-2108\id_rsa Username:docker}
	I0531 18:47:15.270618    5388 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.2279534s)
	I0531 18:47:15.281579    5388 ssh_runner.go:195] Run: systemctl --version
	I0531 18:47:15.312047    5388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:47:15.352185    5388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220531183625-2108
	I0531 18:47:16.425028    5388 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220531183625-2108: (1.0719517s)
	I0531 18:47:16.426229    5388 kubeconfig.go:92] found "multinode-20220531183625-2108" server: "https://127.0.0.1:52455"
	I0531 18:47:16.426229    5388 api_server.go:165] Checking apiserver status ...
	I0531 18:47:16.436730    5388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:47:16.478635    5388 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1765/cgroup
	I0531 18:47:16.510867    5388 api_server.go:181] apiserver freezer: "20:freezer:/docker/b79b9e5c90ac9be66a616fd2ca4c6914de3f9925586f6050118cd138b7722917/kubepods/burstable/podc41ef445dc8c4e876b1cb9b9aab7da97/e5620f2089a1df4105188512a09457c8a0533a11fad19e55e12d62e498c97690"
	I0531 18:47:16.521743    5388 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b79b9e5c90ac9be66a616fd2ca4c6914de3f9925586f6050118cd138b7722917/kubepods/burstable/podc41ef445dc8c4e876b1cb9b9aab7da97/e5620f2089a1df4105188512a09457c8a0533a11fad19e55e12d62e498c97690/freezer.state
	I0531 18:47:16.548579    5388 api_server.go:203] freezer state: "THAWED"
	I0531 18:47:16.548579    5388 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52455/healthz ...
	I0531 18:47:16.563594    5388 api_server.go:266] https://127.0.0.1:52455/healthz returned 200:
	ok
	I0531 18:47:16.563594    5388 status.go:419] multinode-20220531183625-2108 apiserver status = Running (err=<nil>)
	I0531 18:47:16.563594    5388 status.go:255] multinode-20220531183625-2108 status: &{Name:multinode-20220531183625-2108 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 18:47:16.563594    5388 status.go:253] checking status of multinode-20220531183625-2108-m02 ...
	I0531 18:47:16.578019    5388 cli_runner.go:164] Run: docker container inspect multinode-20220531183625-2108-m02 --format={{.State.Status}}
	I0531 18:47:17.650674    5388 cli_runner.go:217] Completed: docker container inspect multinode-20220531183625-2108-m02 --format={{.State.Status}}: (1.0726175s)
	I0531 18:47:17.650828    5388 status.go:328] multinode-20220531183625-2108-m02 host status = "Running" (err=<nil>)
	I0531 18:47:17.650828    5388 host.go:66] Checking if "multinode-20220531183625-2108-m02" exists ...
	I0531 18:47:17.659366    5388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220531183625-2108-m02
	I0531 18:47:18.695409    5388 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220531183625-2108-m02: (1.0360383s)
	I0531 18:47:18.695409    5388 host.go:66] Checking if "multinode-20220531183625-2108-m02" exists ...
	I0531 18:47:18.706088    5388 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:47:18.711798    5388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220531183625-2108-m02
	I0531 18:47:19.772015    5388 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220531183625-2108-m02: (1.0602119s)
	I0531 18:47:19.772687    5388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52511 SSHKeyPath:C:\Users\jenkins.minikube7\minikube-integration\.minikube\machines\multinode-20220531183625-2108-m02\id_rsa Username:docker}
	I0531 18:47:19.917942    5388 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.2118489s)
	I0531 18:47:19.930438    5388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:47:19.963385    5388 status.go:255] multinode-20220531183625-2108-m02 status: &{Name:multinode-20220531183625-2108-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0531 18:47:19.963385    5388 status.go:253] checking status of multinode-20220531183625-2108-m03 ...
	I0531 18:47:19.981585    5388 cli_runner.go:164] Run: docker container inspect multinode-20220531183625-2108-m03 --format={{.State.Status}}
	I0531 18:47:21.035720    5388 cli_runner.go:217] Completed: docker container inspect multinode-20220531183625-2108-m03 --format={{.State.Status}}: (1.0541305s)
	I0531 18:47:21.035720    5388 status.go:328] multinode-20220531183625-2108-m03 host status = "Stopped" (err=<nil>)
	I0531 18:47:21.035720    5388 status.go:341] host is not running, skipping remaining checks
	I0531 18:47:21.035720    5388 status.go:255] multinode-20220531183625-2108-m03 status: &{Name:multinode-20220531183625-2108-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (29.07s)

TestMultiNode/serial/StartAfterStop (53.22s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:242: (dbg) Done: docker version -f {{.Server.Version}}: (1.1163128s)
multinode_test.go:252: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 node start m03 --alsologtostderr: (38.491913s)
multinode_test.go:259: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 status
multinode_test.go:259: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 status: (13.3706101s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (53.22s)

TestMultiNode/serial/RestartKeepsNodes (214.24s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220531183625-2108
multinode_test.go:288: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-20220531183625-2108
multinode_test.go:288: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-20220531183625-2108: (38.1730244s)
multinode_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220531183625-2108 --wait=true -v=8 --alsologtostderr
E0531 18:49:13.041188    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
E0531 18:49:13.195243    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
E0531 18:50:02.500971    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
E0531 18:51:10.020323    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
multinode_test.go:293: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220531183625-2108 --wait=true -v=8 --alsologtostderr: (2m55.356692s)
multinode_test.go:298: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220531183625-2108
--- PASS: TestMultiNode/serial/RestartKeepsNodes (214.24s)

TestMultiNode/serial/DeleteNode (43.07s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 node delete m03
E0531 18:52:16.281556    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
multinode_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 node delete m03: (31.6968696s)
multinode_test.go:398: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 status --alsologtostderr
multinode_test.go:398: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 status --alsologtostderr: (9.7659106s)
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:412: (dbg) Done: docker volume ls: (1.0190699s)
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (43.07s)

TestMultiNode/serial/StopMultiNode (39.96s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 stop
multinode_test.go:312: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 stop: (32.3983812s)
multinode_test.go:318: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 status
E0531 18:53:06.084504    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 status: exit status 7 (3.7616089s)

-- stdout --
	multinode-20220531183625-2108
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220531183625-2108-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 status --alsologtostderr: exit status 7 (3.8022626s)

-- stdout --
	multinode-20220531183625-2108
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220531183625-2108-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0531 18:53:07.992667    5612 out.go:296] Setting OutFile to fd 1004 ...
	I0531 18:53:08.057944    5612 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:53:08.057944    5612 out.go:309] Setting ErrFile to fd 848...
	I0531 18:53:08.058024    5612 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:53:08.068768    5612 out.go:303] Setting JSON to false
	I0531 18:53:08.068768    5612 mustload.go:65] Loading cluster: multinode-20220531183625-2108
	I0531 18:53:08.069108    5612 config.go:178] Loaded profile config "multinode-20220531183625-2108": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0531 18:53:08.069108    5612 status.go:253] checking status of multinode-20220531183625-2108 ...
	I0531 18:53:08.084129    5612 cli_runner.go:164] Run: docker container inspect multinode-20220531183625-2108 --format={{.State.Status}}
	I0531 18:53:10.517674    5612 cli_runner.go:217] Completed: docker container inspect multinode-20220531183625-2108 --format={{.State.Status}}: (2.4335339s)
	I0531 18:53:10.517674    5612 status.go:328] multinode-20220531183625-2108 host status = "Stopped" (err=<nil>)
	I0531 18:53:10.517674    5612 status.go:341] host is not running, skipping remaining checks
	I0531 18:53:10.517674    5612 status.go:255] multinode-20220531183625-2108 status: &{Name:multinode-20220531183625-2108 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 18:53:10.517674    5612 status.go:253] checking status of multinode-20220531183625-2108-m02 ...
	I0531 18:53:10.530673    5612 cli_runner.go:164] Run: docker container inspect multinode-20220531183625-2108-m02 --format={{.State.Status}}
	I0531 18:53:11.524600    5612 status.go:328] multinode-20220531183625-2108-m02 host status = "Stopped" (err=<nil>)
	I0531 18:53:11.524772    5612 status.go:341] host is not running, skipping remaining checks
	I0531 18:53:11.524772    5612 status.go:255] multinode-20220531183625-2108-m02 status: &{Name:multinode-20220531183625-2108-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (39.96s)

TestMultiNode/serial/RestartMultiNode (122.11s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:342: (dbg) Done: docker version -f {{.Server.Version}}: (1.0743884s)
multinode_test.go:352: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220531183625-2108 --wait=true -v=8 --alsologtostderr --driver=docker
E0531 18:54:13.042701    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
E0531 18:55:02.505356    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
multinode_test.go:352: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220531183625-2108 --wait=true -v=8 --alsologtostderr --driver=docker: (1m50.6463112s)
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220531183625-2108 status --alsologtostderr: (9.8314263s)
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (122.11s)

TestMultiNode/serial/ValidateNameConflict (141.59s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220531183625-2108
multinode_test.go:450: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220531183625-2108-m02 --driver=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220531183625-2108-m02 --driver=docker: exit status 14 (405.1971ms)

-- stdout --
	* [multinode-20220531183625-2108-m02] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220531183625-2108-m02' is duplicated with machine name 'multinode-20220531183625-2108-m02' in profile 'multinode-20220531183625-2108'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220531183625-2108-m03 --driver=docker
E0531 18:56:10.015382    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
multinode_test.go:458: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220531183625-2108-m03 --driver=docker: (1m53.8895518s)
multinode_test.go:465: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20220531183625-2108
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-20220531183625-2108: exit status 80 (5.731955s)

-- stdout --
	* Adding node m03 to cluster multinode-20220531183625-2108
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220531183625-2108-m03 already exists in multinode-20220531183625-2108-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube_node_f30df829a49c27e09829ed66f8254940e71c1eac_14.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-20220531183625-2108-m03
multinode_test.go:470: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-20220531183625-2108-m03: (21.2146335s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (141.59s)

TestPreload (345.63s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20220531185811-2108 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0
E0531 18:59:13.035231    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
E0531 19:00:02.509450    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
preload_test.go:48: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-20220531185811-2108 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0: (2m47.8879868s)
preload_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-20220531185811-2108 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:61: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-20220531185811-2108 -- docker pull gcr.io/k8s-minikube/busybox: (7.4046763s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20220531185811-2108 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.17.3
E0531 19:01:10.019714    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-20220531185811-2108 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.17.3: (2m20.8709109s)
preload_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-20220531185811-2108 -- docker images
preload_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-20220531185811-2108 -- docker images: (6.2529748s)
helpers_test.go:175: Cleaning up "test-preload-20220531185811-2108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-20220531185811-2108
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-20220531185811-2108: (23.2105076s)
--- PASS: TestPreload (345.63s)

TestScheduledStopWindows (216.00s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-20220531190357-2108 --memory=2048 --driver=docker
E0531 19:04:13.046373    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
E0531 19:05:02.517811    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-20220531190357-2108 --memory=2048 --driver=docker: (1m48.6298879s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-20220531190357-2108 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-20220531190357-2108 --schedule 5m: (5.0207337s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-20220531190357-2108 -n scheduled-stop-20220531190357-2108
E0531 19:05:53.212365    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-20220531190357-2108 -n scheduled-stop-20220531190357-2108: (6.6890373s)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-20220531190357-2108 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-20220531190357-2108 -- sudo systemctl show minikube-scheduled-stop --no-page: (6.0959763s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-20220531190357-2108 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-20220531190357-2108 --schedule 5s: (4.6849334s)
E0531 19:06:10.020629    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-20220531190357-2108
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-20220531190357-2108: exit status 7 (2.7570654s)

-- stdout --
	scheduled-stop-20220531190357-2108
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20220531190357-2108 -n scheduled-stop-20220531190357-2108
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20220531190357-2108 -n scheduled-stop-20220531190357-2108: exit status 7 (2.716614s)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220531190357-2108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-20220531190357-2108
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-20220531190357-2108: (19.3976265s)
--- PASS: TestScheduledStopWindows (216.00s)

TestInsufficientStorage (107.36s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-20220531190733-2108 --memory=2048 --output=json --wait=true --driver=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-20220531190733-2108 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (1m16.3390916s)

-- stdout --
	{"specversion":"1.0","id":"9ef82816-b397-4d24-9771-23cdf2fcb348","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220531190733-2108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1ec73337-b1fd-435f-8138-5ed55815b1be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube7\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"f6eda640-3dca-4c27-aeb6-fab11b25954a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube7\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"54e49c71-a479-4a6b-ab98-3f244c785885","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14079"}}
	{"specversion":"1.0","id":"70492236-0616-461d-95b8-4c67ed934209","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"67b4cce4-89b3-48a4-a751-030c0a865033","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"70054797-c249-4fd0-a5ab-21e50c42076a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2d6e4dee-1142-4b6e-a050-01d2c20970a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c18605a1-c2c5-41bd-8101-953e3c11f8b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with the root privilege"}}
	{"specversion":"1.0","id":"0a275ce4-add0-4f43-9230-19f9ced7ce73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220531190733-2108 in cluster insufficient-storage-20220531190733-2108","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e4cfa515-a538-4523-987b-d5e372334495","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1c803cd6-a7a4-4bbd-a46d-b31c73f2e3a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"abbd6599-447e-4d0e-8c30-431b17076bc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20220531190733-2108 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20220531190733-2108 --output=json --layout=cluster: exit status 7 (5.9495856s)

-- stdout --
	{"Name":"insufficient-storage-20220531190733-2108","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220531190733-2108","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0531 19:08:55.264261    6368 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220531190733-2108" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20220531190733-2108 --output=json --layout=cluster
E0531 19:08:56.290340    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20220531190733-2108 --output=json --layout=cluster: exit status 7 (5.9079467s)

-- stdout --
	{"Name":"insufficient-storage-20220531190733-2108","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220531190733-2108","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0531 19:09:01.174152    3748 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220531190733-2108" does not appear in C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	E0531 19:09:01.211902    3748 status.go:557] unable to read event log: stat: CreateFile C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\insufficient-storage-20220531190733-2108\events.json: The system cannot find the file specified.

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220531190733-2108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-20220531190733-2108
E0531 19:09:13.042625    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-20220531190733-2108: (19.1604082s)
--- PASS: TestInsufficientStorage (107.36s)

TestRunningBinaryUpgrade (324.42s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.9.0.3997657792.exe start -p running-upgrade-20220531191723-2108 --memory=2200 --vm-driver=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.9.0.3997657792.exe start -p running-upgrade-20220531191723-2108 --memory=2200 --vm-driver=docker: (3m20.7771677s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-20220531191723-2108 --memory=2200 --alsologtostderr -v=1 --driver=docker
E0531 19:21:10.017424    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-20220531191723-2108 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m39.4015649s)
helpers_test.go:175: Cleaning up "running-upgrade-20220531191723-2108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-20220531191723-2108
E0531 19:22:33.229356    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-20220531191723-2108: (23.6793872s)
--- PASS: TestRunningBinaryUpgrade (324.42s)

TestMissingContainerUpgrade (484.30s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.9.1.2838634733.exe start -p missing-upgrade-20220531190920-2108 --memory=2200 --driver=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.9.1.2838634733.exe start -p missing-upgrade-20220531190920-2108 --memory=2200 --driver=docker: (4m48.8056144s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220531190920-2108
E0531 19:14:13.041396    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220531190920-2108: (5.962024s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220531190920-2108
version_upgrade_test.go:330: (dbg) Done: docker rm missing-upgrade-20220531190920-2108: (1.2590224s)
version_upgrade_test.go:336: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-20220531190920-2108 --memory=2200 --alsologtostderr -v=1 --driver=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-20220531190920-2108 --memory=2200 --alsologtostderr -v=1 --driver=docker: (2m35.2538445s)
helpers_test.go:175: Cleaning up "missing-upgrade-20220531190920-2108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-20220531190920-2108

=== CONT  TestMissingContainerUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-20220531190920-2108: (32.3838394s)
--- PASS: TestMissingContainerUpgrade (484.30s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.49s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220531190920-2108 --no-kubernetes --kubernetes-version=1.20 --driver=docker

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220531190920-2108 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (487.1805ms)

-- stdout --
	* [NoKubernetes-20220531190920-2108] minikube v1.26.0-beta.1 on Microsoft Windows 10 Enterprise N 10.0.19044 Build 19044
	  - KUBECONFIG=C:\Users\jenkins.minikube7\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube7\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.49s)
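The exit status 14 above is minikube's MK_USAGE error class: the test passes precisely because `--no-kubernetes` and `--kubernetes-version` are rejected as mutually exclusive. The check can be sketched roughly as follows; the function name and structure are illustrative assumptions, not minikube's actual source (only the exit code and message are taken from the log above):

```python
# Illustrative sketch of the flag validation behind the MK_USAGE exit above.
# NOTE: function name and shape are assumptions for illustration; only exit
# status 14 and the error message come from the captured log output.

EXIT_USAGE = 14  # usage-error exit status observed in the log


def validate_start_flags(no_kubernetes, kubernetes_version):
    """Mimic the mutually-exclusive check between --no-kubernetes and
    --kubernetes-version; return (exit_code, message)."""
    if no_kubernetes and kubernetes_version:
        return EXIT_USAGE, "cannot specify --kubernetes-version with --no-kubernetes"
    return 0, ""


code, msg = validate_start_flags(True, "1.20")
print(code, msg)  # 14 cannot specify --kubernetes-version with --no-kubernetes
```

Either flag on its own is valid; only the combination trips the usage error, which is why the remedy printed in stderr is to unset the global `kubernetes-version` config.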

TestStoppedBinaryUpgrade/Setup (0.62s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.62s)

TestNoKubernetes/serial/StartWithK8s (192.25s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220531190920-2108 --driver=docker

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-20220531190920-2108 --driver=docker: (2m58.7596542s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-20220531190920-2108 status -o json
no_kubernetes_test.go:200: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-20220531190920-2108 status -o json: (13.4880662s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (192.25s)

TestStoppedBinaryUpgrade/Upgrade (438.76s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.9.0.4142982750.exe start -p stopped-upgrade-20220531190920-2108 --memory=2200 --vm-driver=docker
E0531 19:09:46.097407    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
E0531 19:10:02.521186    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
E0531 19:11:10.025671    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.9.0.4142982750.exe start -p stopped-upgrade-20220531190920-2108 --memory=2200 --vm-driver=docker: (5m38.9080482s)
version_upgrade_test.go:199: (dbg) Run:  C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.9.0.4142982750.exe -p stopped-upgrade-20220531190920-2108 stop
E0531 19:15:02.523487    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
version_upgrade_test.go:199: (dbg) Done: C:\Users\jenkins.minikube7\AppData\Local\Temp\minikube-v1.9.0.4142982750.exe -p stopped-upgrade-20220531190920-2108 stop: (24.7631623s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-20220531190920-2108 --memory=2200 --alsologtostderr -v=1 --driver=docker
E0531 19:16:10.014049    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
version_upgrade_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-20220531190920-2108 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m15.0915454s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (438.76s)

TestNoKubernetes/serial/StartWithStopK8s (91.71s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220531190920-2108 --no-kubernetes --driver=docker

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-20220531190920-2108 --no-kubernetes --driver=docker: (51.4292734s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-20220531190920-2108 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-20220531190920-2108 status -o json: exit status 2 (7.1429292s)

-- stdout --
	{"Name":"NoKubernetes-20220531190920-2108","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-20220531190920-2108
no_kubernetes_test.go:124: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-20220531190920-2108: (33.1388317s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (91.71s)
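The `status -o json` output captured above shows the expected shape for a profile started with `--no-kubernetes`: the Host is Running while Kubelet and APIServer are Stopped, and the command exits non-zero (status 2 in the log) because Kubernetes components are not running. A small parsing sketch, using the exact JSON from the output above (the `fully_running` helper is an illustrative assumption, not part of minikube):

```python
import json

# Single-node status JSON exactly as captured in the test output above.
status_json = '{"Name":"NoKubernetes-20220531190920-2108","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'

status = json.loads(status_json)

# Hypothetical helper: the profile counts as fully running only if the host
# and both Kubernetes components report "Running".
fully_running = all(
    status[field] == "Running" for field in ("Host", "Kubelet", "APIServer")
)
print(fully_running)  # False: kubelet and apiserver are intentionally stopped
```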

TestPause/serial/Start (130.43s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20220531191437-2108 --memory=2048 --install-addons=false --wait=all --driver=docker

=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-20220531191437-2108 --memory=2048 --install-addons=false --wait=all --driver=docker: (2m10.4262244s)
--- PASS: TestPause/serial/Start (130.43s)

TestStoppedBinaryUpgrade/MinikubeLogs (10.99s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-20220531190920-2108

=== CONT  TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-20220531190920-2108: (10.9888284s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (10.99s)

TestPause/serial/SecondStartNoReconfiguration (38.82s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20220531191437-2108 --alsologtostderr -v=1 --driver=docker

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-20220531191437-2108 --alsologtostderr -v=1 --driver=docker: (38.7944885s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (38.82s)

TestPause/serial/Pause (7.3s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-20220531191437-2108 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-20220531191437-2108 --alsologtostderr -v=5: (7.3031161s)
--- PASS: TestPause/serial/Pause (7.30s)

TestPause/serial/VerifyStatus (6.87s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-20220531191437-2108 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-20220531191437-2108 --output=json --layout=cluster: exit status 2 (6.8677727s)

-- stdout --
	{"Name":"pause-20220531191437-2108","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220531191437-2108","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (6.87s)
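The `--output=json --layout=cluster` result above encodes state as HTTP-like status codes (200 OK, 405 Stopped, 418 Paused), with per-node component breakdowns nested under `Nodes`. A short sketch of walking that structure, using the exact JSON captured in the output above:

```python
import json

# Cluster-layout status JSON exactly as captured in the test output above.
status_json = '{"Name":"pause-20220531191437-2108","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220531191437-2108","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'

status = json.loads(status_json)
print(status["StatusName"])  # Paused (top-level cluster state, code 418)

# Walk each node's components and report their individual states.
for node in status["Nodes"]:
    for name, comp in node["Components"].items():
        print(name, comp["StatusCode"], comp["StatusName"])
```

This also shows why the CLI exits with status 2 here while the test still passes: the paused cluster is exactly the state the test expects.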

TestPause/serial/Unpause (6.95s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-20220531191437-2108 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-20220531191437-2108 --alsologtostderr -v=5: (6.9504295s)
--- PASS: TestPause/serial/Unpause (6.95s)

TestPause/serial/PauseAgain (6.63s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-20220531191437-2108 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-20220531191437-2108 --alsologtostderr -v=5: (6.6313553s)
--- PASS: TestPause/serial/PauseAgain (6.63s)

TestPause/serial/DeletePaused (50.71s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-20220531191437-2108 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-20220531191437-2108 --alsologtostderr -v=5: (50.7073393s)
--- PASS: TestPause/serial/DeletePaused (50.71s)

TestStartStop/group/old-k8s-version/serial/FirstStart (564.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20220531192531-2108 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0
E0531 19:25:36.304179    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-20220531192531-2108 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: (9m24.467124s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (564.47s)

TestStartStop/group/no-preload/serial/FirstStart (180.38s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20220531192611-2108 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6
E0531 19:26:26.113605    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:188: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-20220531192611-2108 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6: (3m0.3754644s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (180.38s)

TestStartStop/group/no-preload/serial/DeployApp (11.44s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220531192611-2108 create -f testdata\busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [0eb34ca6-fa63-4329-873f-e55a80a53c6c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0531 19:29:13.053626    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
helpers_test.go:342: "busybox" [0eb34ca6-fa63-4329-873f-e55a80a53c6c] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.1148997s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220531192611-2108 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.44s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (5.89s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20220531192611-2108 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20220531192611-2108 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (5.5117203s)
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context no-preload-20220531192611-2108 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (5.89s)

TestStartStop/group/no-preload/serial/Stop (18.6s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-20220531192611-2108 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-20220531192611-2108 --alsologtostderr -v=3: (18.5955005s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.60s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (5.95s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220531192611-2108 -n no-preload-20220531192611-2108
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220531192611-2108 -n no-preload-20220531192611-2108: exit status 7 (2.9372816s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20220531192611-2108 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20220531192611-2108 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.0165266s)
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (5.95s)

TestStartStop/group/no-preload/serial/SecondStart (414.65s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20220531192611-2108 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6
E0531 19:30:02.518604    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-20220531192611-2108 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6: (6m45.5366327s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220531192611-2108 -n no-preload-20220531192611-2108
start_stop_delete_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220531192611-2108 -n no-preload-20220531192611-2108: (9.1135626s)
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (414.65s)

TestStartStop/group/embed-certs/serial/FirstStart (497.33s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20220531193346-2108 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-20220531193346-2108 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6: (8m17.3300787s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (497.33s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (131.84s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220531193451-2108 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220531193451-2108 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6: (2m11.8431217s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (131.84s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220531192531-2108 create -f testdata\busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [2cbb6754-4d8e-454a-89cd-338071935532] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [2cbb6754-4d8e-454a-89cd-338071935532] Running
E0531 19:35:02.526204    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:198: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.047518s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220531192531-2108 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.03s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (6.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20220531192531-2108 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20220531192531-2108 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (5.7288567s)
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context old-k8s-version-20220531192531-2108 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (6.16s)

TestStartStop/group/old-k8s-version/serial/Stop (23.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-20220531192531-2108 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-20220531192531-2108 --alsologtostderr -v=3: (23.2932569s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (23.29s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (5.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220531192531-2108 -n old-k8s-version-20220531192531-2108
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220531192531-2108 -n old-k8s-version-20220531192531-2108: exit status 7 (2.967791s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20220531192531-2108 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20220531192531-2108 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.9839086s)
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (5.95s)

TestStartStop/group/old-k8s-version/serial/SecondStart (470.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20220531192531-2108 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0
E0531 19:36:10.030068    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-20220531192531-2108 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: (7m42.585194s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220531192531-2108 -n old-k8s-version-20220531192531-2108

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220531192531-2108 -n old-k8s-version-20220531192531-2108: (7.6852544s)
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (470.27s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (23.15s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-s8kr7" [3f139557-3392-4749-b88f-c245d8ed99da] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-8469778f77-s8kr7" [3f139557-3392-4749-b88f-c245d8ed99da] Running

=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 23.150043s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (23.15s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (13.45s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220531193451-2108 create -f testdata\busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [cec6c589-ee86-4d7c-a9e6-7404f3b459aa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
helpers_test.go:342: "busybox" [cec6c589-ee86-4d7c-a9e6-7404f3b459aa] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 12.1028863s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220531193451-2108 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (13.45s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.96s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-s8kr7" [3f139557-3392-4749-b88f-c245d8ed99da] Running

=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0916363s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context no-preload-20220531192611-2108 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.96s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (6.68s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20220531193451-2108 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20220531193451-2108 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (6.1212476s)
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context default-k8s-different-port-20220531193451-2108 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (6.68s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (7.89s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p no-preload-20220531192611-2108 "sudo crictl images -o json"

=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe ssh -p no-preload-20220531192611-2108 "sudo crictl images -o json": (7.8919331s)
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (7.89s)

TestStartStop/group/default-k8s-different-port/serial/Stop (19.32s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20220531193451-2108 --alsologtostderr -v=3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20220531193451-2108 --alsologtostderr -v=3: (19.3229139s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (19.32s)

TestStartStop/group/no-preload/serial/Pause (42.34s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-20220531192611-2108 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-20220531192611-2108 --alsologtostderr -v=1: (7.1084299s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20220531192611-2108 -n no-preload-20220531192611-2108
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20220531192611-2108 -n no-preload-20220531192611-2108: exit status 2 (7.3098331s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20220531192611-2108 -n no-preload-20220531192611-2108

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20220531192611-2108 -n no-preload-20220531192611-2108: exit status 2 (6.8892333s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-20220531192611-2108 --alsologtostderr -v=1

=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-20220531192611-2108 --alsologtostderr -v=1: (6.6742012s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20220531192611-2108 -n no-preload-20220531192611-2108
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20220531192611-2108 -n no-preload-20220531192611-2108: (7.1475592s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20220531192611-2108 -n no-preload-20220531192611-2108
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20220531192611-2108 -n no-preload-20220531192611-2108: (7.2069428s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (42.34s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (6.11s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220531193451-2108 -n default-k8s-different-port-20220531193451-2108
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220531193451-2108 -n default-k8s-different-port-20220531193451-2108: exit status 7 (3.0780738s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20220531193451-2108 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20220531193451-2108 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.0353574s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (6.11s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (417.3s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220531193451-2108 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220531193451-2108 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.6: (6m48.1485538s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220531193451-2108 -n default-k8s-different-port-20220531193451-2108

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220531193451-2108 -n default-k8s-different-port-20220531193451-2108: (9.1496204s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (417.30s)

TestStartStop/group/newest-cni/serial/FirstStart (144.06s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20220531193849-2108 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6
E0531 19:39:12.271047    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220531192611-2108\client.crt: The system cannot find the path specified.
E0531 19:39:12.286406    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220531192611-2108\client.crt: The system cannot find the path specified.
E0531 19:39:12.302131    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220531192611-2108\client.crt: The system cannot find the path specified.
E0531 19:39:12.333608    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220531192611-2108\client.crt: The system cannot find the path specified.
E0531 19:39:12.380953    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220531192611-2108\client.crt: The system cannot find the path specified.
E0531 19:39:12.475841    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220531192611-2108\client.crt: The system cannot find the path specified.
E0531 19:39:12.648252    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220531192611-2108\client.crt: The system cannot find the path specified.
E0531 19:39:12.977686    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220531192611-2108\client.crt: The system cannot find the path specified.
E0531 19:39:13.055627    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
E0531 19:39:13.247783    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
E0531 19:39:13.629231    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220531192611-2108\client.crt: The system cannot find the path specified.
E0531 19:39:14.911912    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220531192611-2108\client.crt: The system cannot find the path specified.
E0531 19:39:17.472815    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220531192611-2108\client.crt: The system cannot find the path specified.
E0531 19:39:22.601858    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220531192611-2108\client.crt: The system cannot find the path specified.
E0531 19:39:36.112194    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220531192611-2108\client.crt: The system cannot find the path specified.
E0531 19:39:56.599861    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220531192611-2108\client.crt: The system cannot find the path specified.
E0531 19:40:02.517635    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
E0531 19:40:37.570122    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220531192611-2108\client.crt: The system cannot find the path specified.
E0531 19:41:10.031270    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:188: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-20220531193849-2108 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6: (2m24.0588387s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (144.06s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (6.38s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20220531193849-2108 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20220531193849-2108 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (6.3780812s)
start_stop_delete_test.go:213: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (6.38s)

TestStartStop/group/newest-cni/serial/Stop (19.69s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-20220531193849-2108 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-20220531193849-2108 --alsologtostderr -v=3: (19.687105s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (19.69s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (6.67s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220531193849-2108 -n newest-cni-20220531193849-2108
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220531193849-2108 -n newest-cni-20220531193849-2108: exit status 7 (3.2836707s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20220531193849-2108 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20220531193849-2108 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.3884947s)
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (6.67s)

TestStartStop/group/newest-cni/serial/SecondStart (86.19s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20220531193849-2108 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6
E0531 19:41:59.499455    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220531192611-2108\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-20220531193849-2108 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6: (1m17.2524847s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220531193849-2108 -n newest-cni-20220531193849-2108
E0531 19:43:06.130428    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220531193849-2108 -n newest-cni-20220531193849-2108: (8.9341292s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (86.19s)

TestStartStop/group/embed-certs/serial/DeployApp (12.11s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220531193346-2108 create -f testdata\busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [6ffd0477-d293-4341-b968-8374e49617c9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [6ffd0477-d293-4341-b968-8374e49617c9] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.0821465s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220531193346-2108 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.11s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (7.52s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20220531193346-2108 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0531 19:42:16.311529    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20220531193346-2108 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (7.0404662s)
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context embed-certs-20220531193346-2108 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (7.52s)

TestStartStop/group/embed-certs/serial/Stop (19.68s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-20220531193346-2108 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-20220531193346-2108 --alsologtostderr -v=3: (19.6801636s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (19.68s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (6.93s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220531193346-2108 -n embed-certs-20220531193346-2108
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220531193346-2108 -n embed-certs-20220531193346-2108: exit status 7 (3.3004161s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20220531193346-2108 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20220531193346-2108 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.6275731s)
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (6.93s)

TestStartStop/group/embed-certs/serial/SecondStart (430.66s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20220531193346-2108 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-20220531193346-2108 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.6: (7m1.5597288s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220531193346-2108 -n embed-certs-20220531193346-2108
E0531 19:49:56.237357    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220531192531-2108\client.crt: The system cannot find the path specified.
E0531 19:49:56.253317    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220531192531-2108\client.crt: The system cannot find the path specified.
E0531 19:49:56.268451    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220531192531-2108\client.crt: The system cannot find the path specified.
E0531 19:49:56.299298    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220531192531-2108\client.crt: The system cannot find the path specified.
E0531 19:49:56.345288    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220531192531-2108\client.crt: The system cannot find the path specified.
E0531 19:49:56.437806    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220531192531-2108\client.crt: The system cannot find the path specified.
E0531 19:49:56.610970    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220531192531-2108\client.crt: The system cannot find the path specified.
E0531 19:49:56.934369    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220531192531-2108\client.crt: The system cannot find the path specified.
E0531 19:49:57.579150    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220531192531-2108\client.crt: The system cannot find the path specified.
E0531 19:49:58.866591    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220531192531-2108\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220531193346-2108 -n embed-certs-20220531193346-2108: (9.1047157s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (430.66s)
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (8.29s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p newest-cni-20220531193849-2108 "sudo crictl images -o json"
start_stop_delete_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe ssh -p newest-cni-20220531193849-2108 "sudo crictl images -o json": (8.2892778s)
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (8.29s)
TestStartStop/group/newest-cni/serial/Pause (48.9s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-20220531193849-2108 --alsologtostderr -v=1
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-20220531193849-2108 --alsologtostderr -v=1: (8.6544878s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20220531193849-2108 -n newest-cni-20220531193849-2108
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20220531193849-2108 -n newest-cni-20220531193849-2108: exit status 2 (7.1447475s)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-20220531193849-2108 -n newest-cni-20220531193849-2108
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-20220531193849-2108 -n newest-cni-20220531193849-2108: exit status 2 (7.3895515s)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-20220531193849-2108 --alsologtostderr -v=1
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe unpause -p newest-cni-20220531193849-2108 --alsologtostderr -v=1: (7.5526078s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20220531193849-2108 -n newest-cni-20220531193849-2108
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20220531193849-2108 -n newest-cni-20220531193849-2108: (9.631099s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-20220531193849-2108 -n newest-cni-20220531193849-2108
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-20220531193849-2108 -n newest-cni-20220531193849-2108: (8.526691s)
--- PASS: TestStartStop/group/newest-cni/serial/Pause (48.90s)
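For context on the `status --format={{.APIServer}}` and `{{.Kubelet}}` invocations above: minikube renders the flag's value as a Go `text/template` against its status result, which is why the raw stdout is a single word such as `Paused` or `Stopped`. A minimal sketch of that rendering, assuming an illustrative `Status` struct (the field names here are taken from the templates in the log, not from minikube's actual types):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Status is an illustrative stand-in for the data minikube exposes to
// --format templates; only the fields used in the log above are modeled.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

// render parses a --format style template and executes it against a status.
func render(format string, st Status) (string, error) {
	tmpl, err := template.New("status").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, st); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// Mirrors the paused cluster in the log: API server paused, kubelet stopped.
	paused := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"}
	out, err := render("{{.APIServer}}", paused)
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // prints "Paused", matching the -- stdout -- block above
}
```

This is also why the test tolerates `exit status 2`: a paused component is reported via a non-zero exit code, so the harness logs `status error: exit status 2 (may be ok)` and checks the rendered word instead of the exit code alone.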
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.04s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-6fb5469cf5-spl8w" [80d370ac-dc7c-46b5-bd5f-99e80be1d3da] Running
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0370377s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.04s)
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.55s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-6fb5469cf5-spl8w" [80d370ac-dc7c-46b5-bd5f-99e80be1d3da] Running
start_stop_delete_test.go:289: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0370111s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context old-k8s-version-20220531192531-2108 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.55s)
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (7.48s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p old-k8s-version-20220531192531-2108 "sudo crictl images -o json"
=== CONT  TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe ssh -p old-k8s-version-20220531192531-2108 "sudo crictl images -o json": (7.480319s)
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (7.48s)
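The `Found non-minikube image` line above comes from decoding the `crictl images -o json` output and flagging tags outside the expected core image set. A hedged sketch of that filtering (the JSON field subset and the allow-list prefixes below are assumptions for illustration; the real test compares against minikube's own expected-image list):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// imageList loosely mirrors the shape of `sudo crictl images -o json`
// output ({"images":[{"repoTags":[...]},...]}); it models only the
// repoTags field, not the full CRI image schema.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// nonKubeImages returns every tag that does not come from the core
// registries (the prefix list is an illustrative assumption).
func nonKubeImages(raw []byte) ([]string, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return nil, err
	}
	var extra []string
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if !strings.HasPrefix(tag, "k8s.gcr.io/") &&
				!strings.HasPrefix(tag, "gcr.io/k8s-minikube/storage-provisioner") {
				extra = append(extra, tag)
			}
		}
	}
	return extra, nil
}

func main() {
	sample := []byte(`{"images":[
		{"repoTags":["k8s.gcr.io/pause:3.6"]},
		{"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"]}]}`)
	extra, err := nonKubeImages(sample)
	if err != nil {
		panic(err)
	}
	// The busybox tag is reported as non-minikube; the pause image is not.
	fmt.Println(extra)
}
```

The busybox image flagged here (`gcr.io/k8s-minikube/busybox:1.28.4-glibc`) is left over from earlier subtests in the same profile, so the verifier logs it rather than failing.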
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (46.09s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-wq7fr" [1132d0e4-d366-4d40-b80e-3386ac648baf] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-8469778f77-wq7fr" [1132d0e4-d366-4d40-b80e-3386ac648baf] Running
start_stop_delete_test.go:276: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 46.0863668s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (46.09s)
TestNetworkPlugins/group/auto/Start (764.77s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-20220531191922-2108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-20220531191922-2108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker: (12m44.7714047s)
--- PASS: TestNetworkPlugins/group/auto/Start (764.77s)
TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.52s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-wq7fr" [1132d0e4-d366-4d40-b80e-3386ac648baf] Running
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0404548s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context default-k8s-different-port-20220531193451-2108 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.52s)
TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (6.83s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20220531193451-2108 "sudo crictl images -o json"
start_stop_delete_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20220531193451-2108 "sudo crictl images -o json": (6.8305837s)
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (6.83s)
TestStartStop/group/default-k8s-different-port/serial/Pause (48.98s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20220531193451-2108 --alsologtostderr -v=1
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20220531193451-2108 --alsologtostderr -v=1: (12.6278411s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20220531193451-2108 -n default-k8s-different-port-20220531193451-2108
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20220531193451-2108 -n default-k8s-different-port-20220531193451-2108: exit status 2 (6.929274s)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20220531193451-2108 -n default-k8s-different-port-20220531193451-2108
E0531 19:46:10.026588    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20220531193451-2108 -n default-k8s-different-port-20220531193451-2108: exit status 2 (6.9028341s)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-different-port-20220531193451-2108 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-different-port-20220531193451-2108 --alsologtostderr -v=1: (7.6913995s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20220531193451-2108 -n default-k8s-different-port-20220531193451-2108
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20220531193451-2108 -n default-k8s-different-port-20220531193451-2108: (7.6402862s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20220531193451-2108 -n default-k8s-different-port-20220531193451-2108
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20220531193451-2108 -n default-k8s-different-port-20220531193451-2108: (7.1854586s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (48.98s)
TestNetworkPlugins/group/kindnet/Start (171.3s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-20220531191930-2108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-20220531191930-2108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker: (2m51.2986343s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (171.30s)
TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-kldwr" [c002dcb9-f08b-4873-b760-1aa296f27f23] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.0385521s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)
TestNetworkPlugins/group/kindnet/KubeletFlags (6.84s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-20220531191930-2108 "pgrep -a kubelet"
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kindnet-20220531191930-2108 "pgrep -a kubelet": (6.8443717s)
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (6.84s)
TestNetworkPlugins/group/kindnet/NetCatPod (21.13s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-20220531191930-2108 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-d7ftp" [beea48da-1153-439b-af21-120a89916021] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-d7ftp" [beea48da-1153-439b-af21-120a89916021] Running
E0531 19:49:12.280866    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220531192611-2108\client.crt: The system cannot find the path specified.
E0531 19:49:13.054126    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 20.0975181s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (21.13s)
TestNetworkPlugins/group/kindnet/DNS (0.61s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220531191930-2108 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.61s)
TestNetworkPlugins/group/kindnet/Localhost (0.5s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-20220531191930-2108 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.50s)
TestNetworkPlugins/group/kindnet/HairPin (0.63s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-20220531191930-2108 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.63s)
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (40.05s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-9rhsq" [36018c96-4ba0-4da6-a2c3-d0049a513cac] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0531 19:50:01.438033    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220531192531-2108\client.crt: The system cannot find the path specified.
E0531 19:50:02.526610    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
E0531 19:50:06.565686    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220531192531-2108\client.crt: The system cannot find the path specified.
E0531 19:50:16.818006    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220531192531-2108\client.crt: The system cannot find the path specified.
helpers_test.go:342: "kubernetes-dashboard-8469778f77-9rhsq" [36018c96-4ba0-4da6-a2c3-d0049a513cac] Running
E0531 19:50:37.303965    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220531192531-2108\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 40.048925s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (40.05s)
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.55s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-9rhsq" [36018c96-4ba0-4da6-a2c3-d0049a513cac] Running
start_stop_delete_test.go:289: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0420785s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context embed-certs-20220531193346-2108 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.55s)
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (7.43s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p embed-certs-20220531193346-2108 "sudo crictl images -o json"
start_stop_delete_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe ssh -p embed-certs-20220531193346-2108 "sudo crictl images -o json": (7.425268s)
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (7.43s)
TestNetworkPlugins/group/false/Start (394.39s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-20220531191930-2108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker
E0531 19:53:26.148973    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-different-port-20220531193451-2108\client.crt: The system cannot find the path specified.
E0531 19:53:43.337516    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-20220531191930-2108\client.crt: The system cannot find the path specified.
E0531 19:53:43.352133    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-20220531191930-2108\client.crt: The system cannot find the path specified.
E0531 19:53:43.367423    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-20220531191930-2108\client.crt: The system cannot find the path specified.
E0531 19:53:43.398143    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-20220531191930-2108\client.crt: The system cannot find the path specified.
E0531 19:53:43.445379    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-20220531191930-2108\client.crt: The system cannot find the path specified.
E0531 19:53:43.538362    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-20220531191930-2108\client.crt: The system cannot find the path specified.
E0531 19:53:43.707927    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-20220531191930-2108\client.crt: The system cannot find the path specified.
E0531 19:53:44.038713    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-20220531191930-2108\client.crt: The system cannot find the path specified.
E0531 19:53:44.684279    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-20220531191930-2108\client.crt: The system cannot find the path specified.
E0531 19:53:45.974419    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-20220531191930-2108\client.crt: The system cannot find the path specified.
E0531 19:53:48.549004    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-20220531191930-2108\client.crt: The system cannot find the path specified.
E0531 19:53:53.674086    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-20220531191930-2108\client.crt: The system cannot find the path specified.
E0531 19:54:03.923342    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-20220531191930-2108\client.crt: The system cannot find the path specified.
E0531 19:54:12.275109    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220531192611-2108\client.crt: The system cannot find the path specified.
E0531 19:54:13.060065    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
E0531 19:54:24.406832    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-20220531191930-2108\client.crt: The system cannot find the path specified.
E0531 19:54:48.074404    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-different-port-20220531193451-2108\client.crt: The system cannot find the path specified.
E0531 19:54:56.242852    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220531192531-2108\client.crt: The system cannot find the path specified.
E0531 19:55:02.527343    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220531181152-2108\client.crt: The system cannot find the path specified.
E0531 19:55:05.378810    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-20220531191930-2108\client.crt: The system cannot find the path specified.
E0531 19:55:24.038506    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\old-k8s-version-20220531192531-2108\client.crt: The system cannot find the path specified.
E0531 19:55:38.724415    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\no-preload-20220531192611-2108\client.crt: The system cannot find the path specified.
E0531 19:55:53.261351    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
E0531 19:56:10.030527    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\functional-20220531173104-2108\client.crt: The system cannot find the path specified.
E0531 19:56:27.313017    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\kindnet-20220531191930-2108\client.crt: The system cannot find the path specified.
E0531 19:57:04.086121    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\default-k8s-different-port-20220531193451-2108\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p false-20220531191930-2108 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker: (6m34.3890355s)
--- PASS: TestNetworkPlugins/group/false/Start (394.39s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (7.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-20220531191922-2108 "pgrep -a kubelet"

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p auto-20220531191922-2108 "pgrep -a kubelet": (7.6373836s)
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (7.64s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (22.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220531191922-2108 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-z8rhn" [d717ae44-fb53-4826-b939-5589e857c4a8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:342: "netcat-668db85669-z8rhn" [d717ae44-fb53-4826-b939-5589e857c4a8] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 21.0368245s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (22.02s)

                                                
                                    

Test skip (25/254)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.6/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.6/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.6/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.6/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Registry (30.03s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:280: registry stabilized in 35.951ms

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:282: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-gbrnf" [4eead096-86a0-4262-848c-5165c7998117] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:282: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0372056s

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-proxy-sn97l" [7d98aa5a-2ee6-4aa8-ba65-ab5dcdf61bad] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0975415s
addons_test.go:290: (dbg) Run:  kubectl --context addons-20220531171726-2108 delete po -l run=registry-test --now
addons_test.go:295: (dbg) Run:  kubectl --context addons-20220531171726-2108 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:295: (dbg) Done: kubectl --context addons-20220531171726-2108 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (19.4053596s)
addons_test.go:305: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (30.03s)

                                                
                                    
TestAddons/parallel/Ingress (50.19s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:162: (dbg) Run:  kubectl --context addons-20220531171726-2108 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:182: (dbg) Run:  kubectl --context addons-20220531171726-2108 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:182: (dbg) Done: kubectl --context addons-20220531171726-2108 replace --force -f testdata\nginx-ingress-v1.yaml: (5.034552s)
addons_test.go:195: (dbg) Run:  kubectl --context addons-20220531171726-2108 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:195: (dbg) Done: kubectl --context addons-20220531171726-2108 replace --force -f testdata\nginx-pod-svc.yaml: (2.0703872s)
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [77ab5c13-2a3b-4e0d-9693-d05ec1510551] Pending
helpers_test.go:342: "nginx" [77ab5c13-2a3b-4e0d-9693-d05ec1510551] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [77ab5c13-2a3b-4e0d-9693-d05ec1510551] Running

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 36.1949347s
addons_test.go:212: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220531171726-2108 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:212: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220531171726-2108 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (6.2106849s)
addons_test.go:232: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (50.19s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:448: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (300.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20220531173104-2108 --alsologtostderr -v=1]

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:908: output didn't produce a URL
functional_test.go:902: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20220531173104-2108 --alsologtostderr -v=1] ...
helpers_test.go:488: unable to find parent, assuming dead: process does not exist
E0531 17:44:13.016157    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
E0531 17:45:36.228526    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
E0531 17:49:13.030356    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
E0531 17:54:13.025567    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
E0531 17:59:13.024349    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
E0531 18:02:16.242797    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
E0531 18:04:13.020775    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
E0531 18:09:13.031553    2108 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube7\minikube-integration\.minikube\profiles\addons-20220531171726-2108\client.crt: The system cannot find the path specified.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.02s)

                                                
                                    
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (36.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220531173104-2108 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1564: (dbg) Run:  kubectl --context functional-20220531173104-2108 expose deployment hello-node-connect --type=NodePort --port=8080

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-74cf8bc446-5wbb6" [74c4ce43-f838-42ab-971e-c9df73ddc188] Pending
helpers_test.go:342: "hello-node-connect-74cf8bc446-5wbb6" [74c4ce43-f838-42ab-971e-c9df73ddc188] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-74cf8bc446-5wbb6" [74c4ce43-f838-42ab-971e-c9df73ddc188] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 35.2052707s
functional_test.go:1575: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (36.07s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:193: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (47.43s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:162: (dbg) Run:  kubectl --context ingress-addon-legacy-20220531181152-2108 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:162: (dbg) Done: kubectl --context ingress-addon-legacy-20220531181152-2108 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.16819s)
addons_test.go:182: (dbg) Run:  kubectl --context ingress-addon-legacy-20220531181152-2108 replace --force -f testdata\nginx-ingress-v1beta1.yaml
addons_test.go:182: (dbg) Done: kubectl --context ingress-addon-legacy-20220531181152-2108 replace --force -f testdata\nginx-ingress-v1beta1.yaml: (1.1355726s)
addons_test.go:195: (dbg) Run:  kubectl --context ingress-addon-legacy-20220531181152-2108 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:195: (dbg) Done: kubectl --context ingress-addon-legacy-20220531181152-2108 replace --force -f testdata\nginx-pod-svc.yaml: (1.2985518s)
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [13c74a00-126d-47de-beff-fcb6f52d203b] Pending
helpers_test.go:342: "nginx" [13c74a00-126d-47de-beff-fcb6f52d203b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [13c74a00-126d-47de-beff-fcb6f52d203b] Running
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 25.2618826s
addons_test.go:212: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220531181152-2108 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:212: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220531181152-2108 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (6.3876701s)
addons_test.go:232: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestIngressAddonLegacy/serial/ValidateIngressAddons (47.43s)

                                                
                                    
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (11.8s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:105: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220531193439-2108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20220531193439-2108
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20220531193439-2108: (11.8034639s)
--- SKIP: TestStartStop/group/disable-driver-mounts (11.80s)

                                                
                                    
TestNetworkPlugins/group/flannel (7.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220531191922-2108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p flannel-20220531191922-2108
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p flannel-20220531191922-2108: (7.6260972s)
--- SKIP: TestNetworkPlugins/group/flannel (7.63s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel (7.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220531191930-2108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-flannel-20220531191930-2108
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-flannel-20220531191930-2108: (7.5717946s)
--- SKIP: TestNetworkPlugins/group/custom-flannel (7.57s)

                                                
                                    