Test Report: Docker_Windows 13639

60328d4d40a11ac7c18c6243f597bcfbb3050148:2022-05-12:23896
Failed tests: 13/268

TestFunctional/parallel/ServiceCmd (1963.92s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1435: (dbg) Run:  kubectl --context functional-20220511231058-7184 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-20220511231058-7184 expose deployment hello-node --type=NodePort --port=8080
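
The two commands above create a Deployment running k8s.gcr.io/echoserver:1.8 and expose it as a NodePort Service on port 8080. For readers reproducing the setup outside the harness, here is a minimal client-go sketch of the expose step; the names and port come from the log, but the code is an illustration, not minikube's test implementation, and the kubeconfig path is a placeholder.

// Illustrative client-go equivalent of:
//   kubectl expose deployment hello-node --type=NodePort --port=8080
// Assumption: not the test's code; kubeconfig path is hypothetical.
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "hello-node", Namespace: "default"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"app": "hello-node"},
			Ports: []corev1.ServicePort{{
				Port:       8080,
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}
	if _, err := cs.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}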

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54fbb85-swswq" [0c2db6df-37b9-4201-b3a9-44e6d839ff68] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
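
The wait at functional_test.go:1446 polls the default namespace for pods labeled app=hello-node, with a 10m0s budget, until they reach Running (the pod is briefly Pending / ContainersNotReady above). A hedged stand-alone sketch of that poll with client-go follows; it is not the helpers_test.go implementation.

// Sketch of the 10m0s label-selector wait, assuming a pre-built clientset.
package example

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForHelloNode(ctx context.Context, cs *kubernetes.Clientset) error {
	// Poll every 2s, up to 10m, until every app=hello-node pod is Running.
	return wait.PollImmediate(2*time.Second, 10*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods("default").List(ctx,
			metav1.ListOptions{LabelSelector: "app=hello-node"})
		if err != nil {
			return false, err
		}
		if len(pods.Items) == 0 {
			return false, nil
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Println(p.Name, "still", p.Status.Phase) // e.g. Pending, as logged above
				return false, nil
			}
		}
		return true, nil
	})
}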

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54fbb85-swswq" [0c2db6df-37b9-4201-b3a9-44e6d839ff68] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 9.0945525s
functional_test.go:1451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 service list: (7.33968s)
functional_test.go:1465: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1394: Failed to sent interrupt to proc not supported by windows
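
The message above is the pivotal clue: the harness tried to stop the still-running `minikube service` tunnel with an interrupt, but on Windows Go's os.Process.Signal supports only os.Kill; os.Interrupt fails with syscall.EWINDOWS, whose Error() string is exactly the "not supported by windows" text in that log line, so the child keeps running until the 32-minute timeout recorded below. A minimal reproduction (notepad.exe is just a stand-in for any long-running child):

// Demonstrates why the interrupt could not be delivered on Windows.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("notepad.exe")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// On Windows this returns syscall.EWINDOWS ("not supported by windows"),
	// matching the error embedded in the log line above.
	if err := cmd.Process.Signal(os.Interrupt); err != nil {
		fmt.Println("failed to send interrupt:", err)
	}
	_ = cmd.Process.Kill() // Kill is the only signal Windows supports here
}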

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1465: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 service --namespace=default --https --url hello-node: exit status 1 (32m2.3005163s)

-- stdout --
	https://127.0.0.1:64099

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1467: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-20220511231058-7184 service --namespace=default --https --url hello-node" : exit status 1
functional_test.go:1404: service test failed - dumping debug information
functional_test.go:1405: -----------------------service failure post-mortem--------------------------------
functional_test.go:1408: (dbg) Run:  kubectl --context functional-20220511231058-7184 describe po hello-node
functional_test.go:1412: hello-node pod describe:
Name:         hello-node-54fbb85-swswq
Namespace:    default
Priority:     0
Node:         functional-20220511231058-7184/192.168.49.2
Start Time:   Wed, 11 May 2022 23:18:39 +0000
Labels:       app=hello-node
pod-template-hash=54fbb85
Annotations:  <none>
Status:       Running
IP:           172.17.0.7
IPs:
IP:           172.17.0.7
Controlled By:  ReplicaSet/hello-node-54fbb85
Containers:
echoserver:
Container ID:   docker://b131f8171a5cea2e9295bfdf416a54e8a4dd7b066ff46c81afefc4a1b6138ee9
Image:          k8s.gcr.io/echoserver:1.8
Image ID:       docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port:           <none>
Host Port:      <none>
State:          Running
Started:      Wed, 11 May 2022 23:18:42 +0000
Ready:          True
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cbt42 (ro)
Conditions:
Type              Status
Initialized       True 
Ready             True 
ContainersReady   True 
PodScheduled      True 
Volumes:
kube-api-access-cbt42:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type    Reason     Age   From                                     Message
----    ------     ----  ----                                     -------
Normal  Scheduled  32m   default-scheduler                        Successfully assigned default/hello-node-54fbb85-swswq to functional-20220511231058-7184
Normal  Pulled     32m   kubelet, functional-20220511231058-7184  Container image "k8s.gcr.io/echoserver:1.8" already present on machine
Normal  Created    32m   kubelet, functional-20220511231058-7184  Created container echoserver
Normal  Started    32m   kubelet, functional-20220511231058-7184  Started container echoserver

Name:         hello-node-connect-74cf8bc446-45d4d
Namespace:    default
Priority:     0
Node:         functional-20220511231058-7184/192.168.49.2
Start Time:   Wed, 11 May 2022 23:17:39 +0000
Labels:       app=hello-node-connect
pod-template-hash=74cf8bc446
Annotations:  <none>
Status:       Running
IP:           172.17.0.6
IPs:
IP:           172.17.0.6
Controlled By:  ReplicaSet/hello-node-connect-74cf8bc446
Containers:
echoserver:
Container ID:   docker://8b3160aeac55c9783a8f576430c7a12c8654fb2ca05dd6d8c2e1b9b96bff8c4f
Image:          k8s.gcr.io/echoserver:1.8
Image ID:       docker-pullable://k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
Port:           <none>
Host Port:      <none>
State:          Running
Started:      Wed, 11 May 2022 23:18:33 +0000
Ready:          True
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cdxjs (ro)
Conditions:
Type              Status
Initialized       True 
Ready             True 
ContainersReady   True 
PodScheduled      True 
Volumes:
kube-api-access-cdxjs:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type    Reason     Age   From                                     Message
----    ------     ----  ----                                     -------
Normal  Scheduled  33m   default-scheduler                        Successfully assigned default/hello-node-connect-74cf8bc446-45d4d to functional-20220511231058-7184
Normal  Pulling    33m   kubelet, functional-20220511231058-7184  Pulling image "k8s.gcr.io/echoserver:1.8"
Normal  Pulled     32m   kubelet, functional-20220511231058-7184  Successfully pulled image "k8s.gcr.io/echoserver:1.8" in 47.1956236s
Normal  Created    32m   kubelet, functional-20220511231058-7184  Created container echoserver
Normal  Started    32m   kubelet, functional-20220511231058-7184  Started container echoserver

functional_test.go:1414: (dbg) Run:  kubectl --context functional-20220511231058-7184 logs -l app=hello-node
functional_test.go:1418: hello-node logs:
functional_test.go:1420: (dbg) Run:  kubectl --context functional-20220511231058-7184 describe svc hello-node
functional_test.go:1424: hello-node svc describe:
Name:                     hello-node
Namespace:                default
Labels:                   app=hello-node
Annotations:              <none>
Selector:                 app=hello-node
Type:                     NodePort
IP:                       10.99.234.103
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31921/TCP
Endpoints:                172.17.0.7:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
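The Service itself is healthy: a NodePort (31921/TCP) with a live endpoint at 172.17.0.7:8080. The failure is host-side. With the Docker driver on Windows the node IP (192.168.49.2) is not routable from the host, so `minikube service --url` prints a tunneled URL instead (https://127.0.0.1:64099 in the stdout above) and then blocks to keep the tunnel alive — hence the warning that the terminal must stay open, and an exit only when the process is killed. A caller that needs the URL programmatically can bound the command with a context deadline, since exec.CommandContext kills (rather than interrupts) the child when the context expires, which does work on Windows. A hedged sketch, with the profile and service names taken from this log:

// Capture the tunnel URL with a hard deadline instead of an interrupt.
// Note the tunnel closes once the process is killed, so the URL is only
// usable while the command runs; this mirrors the exit-status-1 + stdout
// combination recorded in this report.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	cmd := exec.CommandContext(ctx, "minikube", "-p", "functional-20220511231058-7184",
		"service", "--namespace=default", "--https", "--url", "hello-node")
	out, err := cmd.Output() // killed at the deadline; stdout already holds the URL
	fmt.Printf("url=%q err=%v\n", string(out), err)
}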
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/ServiceCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220511231058-7184
helpers_test.go:231: (dbg) Done: docker inspect functional-20220511231058-7184: (1.06803s)
helpers_test.go:235: (dbg) docker inspect functional-20220511231058-7184:

-- stdout --
	[
	    {
	        "Id": "03f6e31851f4480d46a93f6cf4e4b4d76c14a2571de89d9e8bf5d133274d2d2e",
	        "Created": "2022-05-11T23:11:54.2093463Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 21016,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-11T23:11:55.1892393Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8a42e1145657f551cd435eddb43b96ab44d0facbe44106da934225366eeb7757",
	        "ResolvConfPath": "/var/lib/docker/containers/03f6e31851f4480d46a93f6cf4e4b4d76c14a2571de89d9e8bf5d133274d2d2e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/03f6e31851f4480d46a93f6cf4e4b4d76c14a2571de89d9e8bf5d133274d2d2e/hostname",
	        "HostsPath": "/var/lib/docker/containers/03f6e31851f4480d46a93f6cf4e4b4d76c14a2571de89d9e8bf5d133274d2d2e/hosts",
	        "LogPath": "/var/lib/docker/containers/03f6e31851f4480d46a93f6cf4e4b4d76c14a2571de89d9e8bf5d133274d2d2e/03f6e31851f4480d46a93f6cf4e4b4d76c14a2571de89d9e8bf5d133274d2d2e-json.log",
	        "Name": "/functional-20220511231058-7184",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20220511231058-7184:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20220511231058-7184",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b7ba6a2e1d7aa95e3e95797f43b541ad015c3547d42b5e8ff611b22b2b7a2610-init/diff:/var/lib/docker/overlay2/f5e9ce82d74c36e0b6de7ac5d28dc4eacb2abae094b3550db66974795ad74446/diff:/var/lib/docker/overlay2/63fa2edc88f635760dd847938696c8fc50aad8a0c51ab6c6f93b0aa9a6fcefe6/diff:/var/lib/docker/overlay2/5fcaace21e215fd120a775470b4a78ef06d9f466e024f0b1baef234ddf87f04f/diff:/var/lib/docker/overlay2/6c9accb62919ca088333f8b3a193f43886e2c3042c5ec726e1c7fd19ee183024/diff:/var/lib/docker/overlay2/a9a1aea640018bd9e524c36422ea97cc97a391601b718810fed779260e48c87a/diff:/var/lib/docker/overlay2/2476b6e8d656e43d76d4288098236bc765cb37fa9dde696f09bfce485e99229e/diff:/var/lib/docker/overlay2/6cdf08ddc61561e961de67f3e14478397868c53a4d8552909a5b84e35b28db1f/diff:/var/lib/docker/overlay2/6f6f8b6686cc7838a52ce30fdc4e938cde2fb68b161e09a9bc81fa11011619a6/diff:/var/lib/docker/overlay2/9f55b91f762ea60dc3da71cf56e5ca24181e712389d99cb8d972bba474f5d6a4/diff:/var/lib/docker/overlay2/4c269b
df57eae0d21d2034dc086308278f9b00f2301c726b6de876b9ff97298d/diff:/var/lib/docker/overlay2/5fb8ed9b9e765df8150f27b7847ec7333b2c7d978dbc1161da97d630ec7e43e2/diff:/var/lib/docker/overlay2/3a297e9f6ab51d930ef61c49a0fea772cdc2a2704a077db6adb142eb044d9a93/diff:/var/lib/docker/overlay2/2068464f4655627fb513b31660ab34c938e559da10d44fd723ce9e1d744a037d/diff:/var/lib/docker/overlay2/f783596106daebadefbb7774015c369d757665d434c96581b426b2e5f5b453c4/diff:/var/lib/docker/overlay2/ac8927d3cc7829cc82e4c0214dd4adee97aedbe2b7d992cbbc08288443c8e320/diff:/var/lib/docker/overlay2/91eff4612dd56b2c82f4357b879f9cdcbc13704bf8f6dcbfc56febb104774843/diff:/var/lib/docker/overlay2/ee7366acc162efb7b878c4c56df021a8ef756fa595230ffe898cd0dd0355eb44/diff:/var/lib/docker/overlay2/ab5df115d2ec8cd71172942a2e449de32b93a3b6b5d90122c0e734c5e11d6bb2/diff:/var/lib/docker/overlay2/59fcbe9b552129cdc5c96e9e8ff27f4b88a12645aec3cf8f48f28d91521760e0/diff:/var/lib/docker/overlay2/0684681eb5880654b43e8803ef8f17b85e6129c85ff81c13b509563184a77625/diff:/var/lib/d
ocker/overlay2/8bbd24801b480df6ca8545e8f8bb09c17b7598c2868fb94ea5b8775ce2f311b4/diff:/var/lib/docker/overlay2/f28553cc59fccccfdfc5c24b7b8dfe4055c625d0a004731911c34b4ba32a9dfb/diff:/var/lib/docker/overlay2/2e47a8ef6e4481885d71f57a1d9ef99898b741644addd2796de5c2f4c696edb0/diff:/var/lib/docker/overlay2/e1f1eaaa809c974dabd197f590d19be05325f506a53a9a1f8ba29defd7096f60/diff:/var/lib/docker/overlay2/83ce12af60df76f98283ed8f3450cd5727b42d06055b18f04a18068b105ae128/diff:/var/lib/docker/overlay2/5fd34820f54e7f8f0c898c21b5d9d030e5b82c65c901897306c3db475481167f/diff:/var/lib/docker/overlay2/3ecb6f46fa47a8906ff5de1da5a63be9c664ff5bc66faf870126868d36bb77c6/diff:/var/lib/docker/overlay2/ccb92f12dd3e84b11b2c9b1ef6a0581ad5894648432ebe7cb5d16d48c7aacf6e/diff:/var/lib/docker/overlay2/7c6d11dc9abdd4916f3759c8ae4db8c3011cff872f2fd3cc502e7f663e496765/diff:/var/lib/docker/overlay2/b865b0351704115fa113e25f7651d1dc1e2f0348c332552e555e898094f34802/diff:/var/lib/docker/overlay2/bbcf207462c3f88368214d8e4ca222f28a828bd30661741d421665b4d10
80f07/diff:/var/lib/docker/overlay2/b554a32e9a2e4d3773e918754c27a1b32bc7ec5327d3bd1f52d7a146a07fa2c5/diff:/var/lib/docker/overlay2/d0a997bacfa9b1b54f61c62f00ad2797616ea9bb55182aad68ed805f96f5f72b/diff:/var/lib/docker/overlay2/e0c168ecfe6a93618f4f653c1aba422023114f242ab1045591d0c8454573d5c2/diff:/var/lib/docker/overlay2/fb67af38a46ef55935fcfb4f1be5f34b45b3d0e1c571538828117f23eedea417/diff:/var/lib/docker/overlay2/e96ed0776e5f27ef225469ac5f5e8ed2e299c72d5db88782599c0fdd1cec2fe3/diff:/var/lib/docker/overlay2/91b77e60e0a7864ace4f5a4d65f465bd7fe862616a87a74ee9fee21dc5dceb07/diff:/var/lib/docker/overlay2/9829211293f70b356dfa8d07b5dbbc3a6d05415cbd2840fd9dd948b8b315bf18/diff:/var/lib/docker/overlay2/dc35dda36e34a2f4f3a5d958b1a7d4d75db8655c4bc7b4b3d9591f43f9a645fc/diff:/var/lib/docker/overlay2/968c2bb04f641a9c8bd30d38659dc28973b31bfd577bb1aa355ae6c2ab4a0d34/diff:/var/lib/docker/overlay2/37432c6ae0b10a52e95b215fdd2e256362060f32c4a52f0d2021b6e10b3ed77b/diff:/var/lib/docker/overlay2/77687f9734b19f3e8a5bb08b07067e0b572775
20867b7a6ad80b67ffebe332d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b7ba6a2e1d7aa95e3e95797f43b541ad015c3547d42b5e8ff611b22b2b7a2610/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b7ba6a2e1d7aa95e3e95797f43b541ad015c3547d42b5e8ff611b22b2b7a2610/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b7ba6a2e1d7aa95e3e95797f43b541ad015c3547d42b5e8ff611b22b2b7a2610/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20220511231058-7184",
	                "Source": "/var/lib/docker/volumes/functional-20220511231058-7184/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20220511231058-7184",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20220511231058-7184",
	                "name.minikube.sigs.k8s.io": "functional-20220511231058-7184",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2fbef82a382778c047170c6728b78eff526a37dc48d3a1b6bab2c12784116af8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63732"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63728"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63729"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63730"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63731"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2fbef82a3827",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20220511231058-7184": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "03f6e31851f4",
	                        "functional-20220511231058-7184"
	                    ],
	                    "NetworkID": "9bc7760fe8956141b37970dabd4c2de8f9f54cc49f02c83af1d07ae10d266b63",
	                    "EndpointID": "571a75d2ebe2c13326f209c314de4e50d603277e93cca451109f303a24d608bc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220511231058-7184 -n functional-20220511231058-7184
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-20220511231058-7184 -n functional-20220511231058-7184: (6.5657883s)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 logs -n 25: (8.3836959s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmd logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|-----------------------------------------------------------------------------------------------------|--------------------------------|-------------------|---------|---------------------|---------------------|
	|    Command     |                                                Args                                                 |            Profile             |       User        | Version |     Start Time      |      End Time       |
	|----------------|-----------------------------------------------------------------------------------------------------|--------------------------------|-------------------|---------|---------------------|---------------------|
	| image          | functional-20220511231058-7184 image save                                                           | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:17 GMT | 11 May 22 23:17 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-20220511231058-7184                               |                                |                   |         |                     |                     |
	|                | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar                              |                                |                   |         |                     |                     |
	| image          | functional-20220511231058-7184 image rm                                                             | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:17 GMT | 11 May 22 23:17 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-20220511231058-7184                               |                                |                   |         |                     |                     |
	| image          | functional-20220511231058-7184                                                                      | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:17 GMT | 11 May 22 23:17 GMT |
	|                | image ls                                                                                            |                                |                   |         |                     |                     |
	| image          | functional-20220511231058-7184 image load                                                           | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:17 GMT | 11 May 22 23:18 GMT |
	|                | C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar                              |                                |                   |         |                     |                     |
	| image          | functional-20220511231058-7184                                                                      | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:18 GMT | 11 May 22 23:18 GMT |
	|                | image ls                                                                                            |                                |                   |         |                     |                     |
	| image          | functional-20220511231058-7184 image save --daemon                                                  | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:18 GMT | 11 May 22 23:18 GMT |
	|                | gcr.io/google-containers/addon-resizer:functional-20220511231058-7184                               |                                |                   |         |                     |                     |
	| cp             | functional-20220511231058-7184                                                                      | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:18 GMT | 11 May 22 23:18 GMT |
	|                | cp testdata\cp-test.txt                                                                             |                                |                   |         |                     |                     |
	|                | /home/docker/cp-test.txt                                                                            |                                |                   |         |                     |                     |
	| ssh            | functional-20220511231058-7184                                                                      | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:18 GMT | 11 May 22 23:18 GMT |
	|                | ssh -n                                                                                              |                                |                   |         |                     |                     |
	|                | functional-20220511231058-7184                                                                      |                                |                   |         |                     |                     |
	|                | sudo cat                                                                                            |                                |                   |         |                     |                     |
	|                | /home/docker/cp-test.txt                                                                            |                                |                   |         |                     |                     |
	| cp             | functional-20220511231058-7184 cp functional-20220511231058-7184:/home/docker/cp-test.txt           | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:18 GMT | 11 May 22 23:18 GMT |
	|                | C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd3903165102\001\cp-test.txt |                                |                   |         |                     |                     |
	| ssh            | functional-20220511231058-7184                                                                      | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:18 GMT | 11 May 22 23:18 GMT |
	|                | ssh -n                                                                                              |                                |                   |         |                     |                     |
	|                | functional-20220511231058-7184                                                                      |                                |                   |         |                     |                     |
	|                | sudo cat                                                                                            |                                |                   |         |                     |                     |
	|                | /home/docker/cp-test.txt                                                                            |                                |                   |         |                     |                     |
	| service        | functional-20220511231058-7184                                                                      | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:18 GMT | 11 May 22 23:18 GMT |
	|                | service list                                                                                        |                                |                   |         |                     |                     |
	| profile        | list --output json                                                                                  | minikube                       | minikube4\jenkins | v1.25.2 | 11 May 22 23:18 GMT | 11 May 22 23:18 GMT |
	| profile        | list                                                                                                | minikube                       | minikube4\jenkins | v1.25.2 | 11 May 22 23:18 GMT | 11 May 22 23:19 GMT |
	| profile        | list -l                                                                                             | minikube                       | minikube4\jenkins | v1.25.2 | 11 May 22 23:19 GMT | 11 May 22 23:19 GMT |
	| profile        | list -o json                                                                                        | minikube                       | minikube4\jenkins | v1.25.2 | 11 May 22 23:19 GMT | 11 May 22 23:19 GMT |
	| profile        | list -o json --light                                                                                | minikube                       | minikube4\jenkins | v1.25.2 | 11 May 22 23:19 GMT | 11 May 22 23:19 GMT |
	| update-context | functional-20220511231058-7184                                                                      | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:19 GMT | 11 May 22 23:19 GMT |
	|                | update-context                                                                                      |                                |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                              |                                |                   |         |                     |                     |
	| update-context | functional-20220511231058-7184                                                                      | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:19 GMT | 11 May 22 23:19 GMT |
	|                | update-context                                                                                      |                                |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                              |                                |                   |         |                     |                     |
	| update-context | functional-20220511231058-7184                                                                      | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:19 GMT | 11 May 22 23:19 GMT |
	|                | update-context                                                                                      |                                |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                              |                                |                   |         |                     |                     |
	| image          | functional-20220511231058-7184                                                                      | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:19 GMT | 11 May 22 23:19 GMT |
	|                | image ls --format short                                                                             |                                |                   |         |                     |                     |
	| image          | functional-20220511231058-7184                                                                      | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:19 GMT | 11 May 22 23:19 GMT |
	|                | image ls --format yaml                                                                              |                                |                   |         |                     |                     |
	| image          | functional-20220511231058-7184                                                                      | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:19 GMT | 11 May 22 23:19 GMT |
	|                | image ls --format json                                                                              |                                |                   |         |                     |                     |
	| image          | functional-20220511231058-7184                                                                      | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:19 GMT | 11 May 22 23:20 GMT |
	|                | image ls --format table                                                                             |                                |                   |         |                     |                     |
	| image          | functional-20220511231058-7184 image build -t                                                       | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:19 GMT | 11 May 22 23:20 GMT |
	|                | localhost/my-image:functional-20220511231058-7184                                                   |                                |                   |         |                     |                     |
	|                | testdata\build                                                                                      |                                |                   |         |                     |                     |
	| image          | functional-20220511231058-7184                                                                      | functional-20220511231058-7184 | minikube4\jenkins | v1.25.2 | 11 May 22 23:20 GMT | 11 May 22 23:20 GMT |
	|                | image ls                                                                                            |                                |                   |         |                     |                     |
	|----------------|-----------------------------------------------------------------------------------------------------|--------------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/11 23:19:06
	Running on machine: minikube4
	Binary: Built with gc go1.18.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0511 23:19:06.698773    9584 out.go:296] Setting OutFile to fd 800 ...
	I0511 23:19:06.757884    9584 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 23:19:06.757884    9584 out.go:309] Setting ErrFile to fd 572...
	I0511 23:19:06.757884    9584 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 23:19:06.772410    9584 out.go:303] Setting JSON to false
	I0511 23:19:06.774880    9584 start.go:115] hostinfo: {"hostname":"minikube4","uptime":9600,"bootTime":1652301546,"procs":167,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0511 23:19:06.774945    9584 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0511 23:19:06.779150    9584 out.go:177] * [functional-20220511231058-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0511 23:19:06.783005    9584 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0511 23:19:06.785051    9584 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0511 23:19:06.788009    9584 out.go:177]   - MINIKUBE_LOCATION=13639
	I0511 23:19:06.790019    9584 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0511 23:19:06.793788    9584 config.go:178] Loaded profile config "functional-20220511231058-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 23:19:06.795256    9584 driver.go:358] Setting default libvirt URI to qemu:///system
	I0511 23:19:09.423075    9584 docker.go:137] docker version: linux-20.10.14
	I0511 23:19:09.431754    9584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 23:19:11.509117    9584 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0772594s)
	I0511 23:19:11.509117    9584 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-05-11 23:19:10.4452228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0511 23:19:11.513113    9584 out.go:177] * Using the docker driver based on existing profile
	I0511 23:19:11.517656    9584 start.go:284] selected driver: docker
	I0511 23:19:11.517712    9584 start.go:801] validating driver "docker" against &{Name:functional-20220511231058-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220511231058-7184 Namespace:de
fault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registr
y-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 23:19:11.518055    9584 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0511 23:19:11.543847    9584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 23:19:13.666248    9584 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1222954s)
	I0511 23:19:13.666648    9584 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-05-11 23:19:12.5944059 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,
profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc.
Version:v0.17.0]] Warnings:<nil>}}
	I0511 23:19:15.049183    9584 cni.go:95] Creating CNI manager for ""
	I0511 23:19:15.049183    9584 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0511 23:19:15.049716    9584 start_flags.go:306] config:
	{Name:functional-20220511231058-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220511231058-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisione
r-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 23:19:15.053088    9584 out.go:177] * dry-run validation complete!
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-05-11 23:11:55 UTC, end at Wed 2022-05-11 23:51:13 UTC. --
	May 11 23:13:11 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:13:11.986682600Z" level=info msg="ignoring event" container=46b5638c21176fa40751b7ed9541c4ea5c01223705ebe6b9b99c1b268479f66e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 23:15:21 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:21.686348000Z" level=info msg="ignoring event" container=94b39cd31ab0c72c81972d03ac089a6623359390d6905d683930f2933abedf9a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 23:15:21 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:21.791479600Z" level=info msg="ignoring event" container=b3d71c27cc01f5ffccc9b1f78f0b44ea932b066232616ea4bed579112cca1639 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 23:15:21 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:21.793385200Z" level=info msg="ignoring event" container=1d317d3b1b9d543fcb1eb150b6f05fdc65bd939a340f88724c115fbf3e5df0c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 23:15:21 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:21.886629900Z" level=info msg="ignoring event" container=840d6d1e99cbc0fac642f4a4f9a98b10f11de239630bedbf583cae69b7e41439 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 23:15:21 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:21.887410500Z" level=info msg="ignoring event" container=81972fb122db21bbc81d05c3c7958ca071de16b06b57384065757e3b284492ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 23:15:21 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:21.888686400Z" level=info msg="ignoring event" container=d1e04917fa9dd04a1a1e0d5398a4b8e34aa08916f31d36fa393fa076f4e33c72 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 23:15:21 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:21.990975200Z" level=info msg="ignoring event" container=b980e1dc9dff756563d38dedde660771a3ea2052ce403ed563d5b9e756c71b67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 23:15:21 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:21.991042600Z" level=info msg="ignoring event" container=1a6e3c895b281388a82578725431a660628a2ef732fb191777bcae11bd2409e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 23:15:21 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:21.991085800Z" level=info msg="ignoring event" container=4917c52c05aedf627fa50bf2d549a5070e0be28363992ab60d160c83109eeb9d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 23:15:22 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:22.084176400Z" level=info msg="ignoring event" container=bb0212ae9a4f0846fc90e1c61d9b3264900b36b8b3eefbe2b1815dced3d8b8d6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 23:15:22 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:22.101679700Z" level=info msg="ignoring event" container=35d20597a3031cf7712c0fd37f094f0b1b0e73d1a8b3e808446e4bc82c309e96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 23:15:23 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:23.303724300Z" level=info msg="ignoring event" container=7250147c7b86e30fcb1b3a1ff069738a13ab18480ab607b1bf342c7ba726ea1f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 23:15:23 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:23.444949900Z" level=info msg="ignoring event" container=f6b592ff517b6c80718b42d2c2b0cb4915c33cd06e0547ff70bb368316762e18 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 23:15:26 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:26.396920900Z" level=info msg="ignoring event" container=054b5a4259fc6c6b3078fe1463cdf144c5fb70cdb1ae26bce699f83ae80a8a46 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 23:15:26 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:26.884275600Z" level=info msg="ignoring event" container=820bc561a4b95cd5d3c5e66a6e87a706208adf1710e10bf364addbfb9905eab3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 23:15:37 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:37.895345900Z" level=info msg="ignoring event" container=581faf92a79bc1afb71117206b24bb52343b9e3f07253179ae4b0bd50efcbf9b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 23:15:38 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:38.390974800Z" level=info msg="ignoring event" container=586a63724b4eab4e0886094bc28f383ce3be23b90d0e4a05b8ab05f7857af696 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 23:15:38 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:38.485239400Z" level=info msg="ignoring event" container=cd253946b71d20ba6c9790c5c63663b315acd6385b962cc2e898a2af78011a7b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 23:15:38 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:38.922683100Z" level=info msg="ignoring event" container=0b073da11a84575abffd481ee4a9189ab0ea3ed5275edcb9bd260e2e6a69f683 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 23:15:46 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:15:46.005625700Z" level=info msg="ignoring event" container=786a5a9d87a93c95097481a015911f9d2570282a7470c793903ed965c4867e82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 23:17:25 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:17:25.101446800Z" level=info msg="ignoring event" container=989cc49a569115d47f4b5af21843899ec02717efdb51fcc5a493944b4abcb8eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 23:17:25 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:17:25.570631200Z" level=info msg="ignoring event" container=4554d1c83b89c3955db419414fc517accead4c678647f4be2667aae3404160ea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 23:20:03 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:20:03.639717500Z" level=info msg="ignoring event" container=d83413bface64a373bc0040660ced87c8eb5f7b4549f5d1ca6351bfed3e955e4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 11 23:20:04 functional-20220511231058-7184 dockerd[511]: time="2022-05-11T23:20:04.266227700Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
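The run of "ignoring event" entries above is dockerd logging containerd task-delete notifications as old containers are torn down across the test's cluster restarts. As a rough illustration only (not part of minikube or this test suite), the same event stream can be tailed with the Docker Go SDK; the client package and filter below are the standard github.com/docker/docker API, but treat the snippet as a sketch of how such events are consumed, not as what the test does:

	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/api/types"
		"github.com/docker/docker/api/types/filters"
		"github.com/docker/docker/client"
	)

	func main() {
		// Connect via the usual DOCKER_HOST environment, negotiating the API version.
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		// Only container events, mirroring the container=... lines in the dockerd log.
		opts := types.EventsOptions{Filters: filters.NewArgs(filters.Arg("type", "container"))}
		msgs, errs := cli.Events(context.Background(), opts)
		for {
			select {
			case m := <-msgs:
				fmt.Printf("%s %s %s\n", m.Type, m.Action, m.Actor.ID)
			case err := <-errs:
				panic(err)
			}
		}
	}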
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                           CREATED             STATE               NAME                      ATTEMPT             POD ID
	b131f8171a5ce       82e4c8a736a4f                                                                                   32 minutes ago      Running             echoserver                0                   bcbdd1bb4acb8
	8b3160aeac55c       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969   32 minutes ago      Running             echoserver                0                   e766462e96943
	7822ed0d85399       mysql@sha256:16e159331007eccc069822f7b731272043ed572a79a196a05ffa2ea127caaf67                   32 minutes ago      Running             mysql                     0                   18bc3c9904b04
	31f7310ad3ac2       nginx@sha256:19da26bd6ef0468ac8ef5c03f01ce1569a4dbfb82d4d7b7ffbd7aed16ad3eb46                   33 minutes ago      Running             myfrontend                0                   f2b1f727d19a5
	c971251e39c4b       nginx@sha256:5a0df7fb7c8c03e4158ae9974bfbd6a15da2bdfdeded4fb694367ec812325d31                   34 minutes ago      Running             nginx                     0                   f111573c9fb7a
	cbc60d45890a7       6e38f40d628db                                                                                   35 minutes ago      Running             storage-provisioner       3                   6323969a7b288
	57d3d67362af7       b0c9e5e4dbb14                                                                                   35 minutes ago      Running             kube-controller-manager   2                   af21574f7d7d5
	4ef13dc1d0152       3fc1d62d65872                                                                                   35 minutes ago      Running             kube-apiserver            1                   3aadaf90b1667
	f18fca8e8bda6       a4ca41631cc7a                                                                                   35 minutes ago      Running             coredns                   1                   f97e337244440
	0b073da11a845       6e38f40d628db                                                                                   35 minutes ago      Exited              storage-provisioner       2                   6323969a7b288
	581faf92a79bc       3fc1d62d65872                                                                                   35 minutes ago      Exited              kube-apiserver            0                   3aadaf90b1667
	b592b11477259       884d49d6d8c9f                                                                                   35 minutes ago      Running             kube-scheduler            1                   90c7267330c22
	3d613c6cc10dc       25f8c7f3da61c                                                                                   35 minutes ago      Running             etcd                      1                   915e885bbb52e
	786a5a9d87a93       b0c9e5e4dbb14                                                                                   35 minutes ago      Exited              kube-controller-manager   1                   af21574f7d7d5
	c64417fcd163e       3c53fa8541f95                                                                                   35 minutes ago      Running             kube-proxy                1                   01f72a604ccd6
	820bc561a4b95       a4ca41631cc7a                                                                                   38 minutes ago      Exited              coredns                   0                   4917c52c05aed
	d1e04917fa9dd       3c53fa8541f95                                                                                   38 minutes ago      Exited              kube-proxy                0                   1a6e3c895b281
	f6b592ff517b6       884d49d6d8c9f                                                                                   38 minutes ago      Exited              kube-scheduler            0                   b980e1dc9dff7
	35d20597a3031       25f8c7f3da61c                                                                                   38 minutes ago      Exited              etcd                      0                   840d6d1e99cbc
	
	* 
	* ==> coredns [820bc561a4b9] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [f18fca8e8bda] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
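The "plugin/ready: Still waiting on: \"kubernetes\"" lines show CoreDNS's ready plugin holding the pod NotReady until its kubernetes plugin has synced with the (restarting) API server. The plugin exposes that state over HTTP, by default on port 8181 at /ready, which is what the kubelet readiness probe polls. A minimal sketch of the same check (port and path are the CoreDNS defaults; the polling loop is illustrative):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// CoreDNS's ready plugin answers 200 OK on :8181/ready once every
		// enabled plugin (including kubernetes) reports ready, 503 before that.
		for {
			resp, err := http.Get("http://localhost:8181/ready")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("coredns is ready")
					return
				}
			}
			time.Sleep(time.Second)
		}
	}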
	
	* 
	* ==> describe nodes <==
	* Name:               functional-20220511231058-7184
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-20220511231058-7184
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50a7977b568d2ad3e04003527a57f4502d6177a0
	                    minikube.k8s.io/name=functional-20220511231058-7184
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_11T23_12_46_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 11 May 2022 23:12:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20220511231058-7184
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 11 May 2022 23:51:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 11 May 2022 23:46:09 +0000   Wed, 11 May 2022 23:12:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 11 May 2022 23:46:09 +0000   Wed, 11 May 2022 23:12:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 11 May 2022 23:46:09 +0000   Wed, 11 May 2022 23:12:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 11 May 2022 23:46:09 +0000   Wed, 11 May 2022 23:12:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20220511231058-7184
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	System Info:
	  Machine ID:                 8556a0a9a0e64ba4b825f672d2dce0b9
	  System UUID:                8556a0a9a0e64ba4b825f672d2dce0b9
	  Boot ID:                    10186544-b659-4889-afdb-c2512535b797
	  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.15
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-54fbb85-swswq                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         32m
	  default                     hello-node-connect-74cf8bc446-45d4d                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         33m
	  default                     mysql-b87c45988-v7bjw                                     600m (3%)     700m (4%)   512Mi (0%)       700Mi (1%)     33m
	  default                     nginx-svc                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         34m
	  default                     sp-pod                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         33m
	  kube-system                 coredns-64897985d-cvj5g                                   100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     38m
	  kube-system                 etcd-functional-20220511231058-7184                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         38m
	  kube-system                 kube-apiserver-functional-20220511231058-7184             250m (1%)     0 (0%)      0 (0%)           0 (0%)         35m
	  kube-system                 kube-controller-manager-functional-20220511231058-7184    200m (1%)     0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 kube-proxy-q6649                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 kube-scheduler-functional-20220511231058-7184             100m (0%)     0 (0%)      0 (0%)           0 (0%)         38m
	  kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         38m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                1350m (8%)  700m (4%)
	  memory             682Mi (1%)  870Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 35m                kube-proxy  
	  Normal  Starting                 38m                kube-proxy  
	  Normal  NodeHasNoDiskPressure    38m (x5 over 38m)  kubelet     Node functional-20220511231058-7184 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38m (x5 over 38m)  kubelet     Node functional-20220511231058-7184 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  38m (x6 over 38m)  kubelet     Node functional-20220511231058-7184 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38m                kubelet     Node functional-20220511231058-7184 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38m                kubelet     Node functional-20220511231058-7184 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  38m                kubelet     Node functional-20220511231058-7184 status is now: NodeHasSufficientMemory
	  Normal  Starting                 38m                kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  38m                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                38m                kubelet     Node functional-20220511231058-7184 status is now: NodeReady
	  Normal  Starting                 35m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  35m (x8 over 35m)  kubelet     Node functional-20220511231058-7184 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35m (x8 over 35m)  kubelet     Node functional-20220511231058-7184 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35m (x7 over 35m)  kubelet     Node functional-20220511231058-7184 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  35m                kubelet     Updated Node Allocatable limit across pods
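The percentages in the Allocated resources table above are just the summed pod requests and limits divided by the node's allocatable capacity, truncated to whole percent: 1350m of 16 CPUs is 8%, and 682Mi of 52638988Ki is 1%. A small sketch of that arithmetic, with every value copied from the tables above:

	package main

	import "fmt"

	func main() {
		// Node allocatable from "Allocatable" above: 16 CPUs, 52638988Ki memory.
		allocCPUMilli := int64(16000)
		allocMemKi := int64(52638988)

		// Summed pod requests/limits from "Allocated resources".
		cpuReq, cpuLim := int64(1350), int64(700)          // millicores
		memReq, memLim := int64(682*1024), int64(870*1024) // Mi -> Ki

		pct := func(part, whole int64) int64 { return part * 100 / whole }
		fmt.Printf("cpu     %dm (%d%%)   %dm (%d%%)\n",
			cpuReq, pct(cpuReq, allocCPUMilli), cpuLim, pct(cpuLim, allocCPUMilli))
		fmt.Printf("memory  %dMi (%d%%)  %dMi (%d%%)\n",
			memReq/1024, pct(memReq, allocMemKi), memLim/1024, pct(memLim, allocMemKi))
	}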
	
	* 
	* ==> dmesg <==
	* [May11 23:26] WSL2: Performing memory compaction.
	[May11 23:27] WSL2: Performing memory compaction.
	[May11 23:28] WSL2: Performing memory compaction.
	[May11 23:29] WSL2: Performing memory compaction.
	[May11 23:30] WSL2: Performing memory compaction.
	[May11 23:31] WSL2: Performing memory compaction.
	[May11 23:32] WSL2: Performing memory compaction.
	[May11 23:33] WSL2: Performing memory compaction.
	[May11 23:34] WSL2: Performing memory compaction.
	[May11 23:35] WSL2: Performing memory compaction.
	[May11 23:36] WSL2: Performing memory compaction.
	[May11 23:37] WSL2: Performing memory compaction.
	[May11 23:38] WSL2: Performing memory compaction.
	[May11 23:39] WSL2: Performing memory compaction.
	[May11 23:40] WSL2: Performing memory compaction.
	[May11 23:41] WSL2: Performing memory compaction.
	[May11 23:42] WSL2: Performing memory compaction.
	[May11 23:43] WSL2: Performing memory compaction.
	[May11 23:44] WSL2: Performing memory compaction.
	[May11 23:45] WSL2: Performing memory compaction.
	[May11 23:46] WSL2: Performing memory compaction.
	[May11 23:47] WSL2: Performing memory compaction.
	[May11 23:48] WSL2: Performing memory compaction.
	[May11 23:49] WSL2: Performing memory compaction.
	[May11 23:50] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [35d20597a303] <==
	* {"level":"info","ts":"2022-05-11T23:12:37.594Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-11T23:12:37.594Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-05-11T23:12:37.596Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-05-11T23:12:37.675Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-05-11T23:12:37.675Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-05-11T23:12:37.675Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-05-11T23:12:37.676Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-11T23:12:37.677Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-05-11T23:12:37.678Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2022-05-11T23:12:42.199Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"112.8011ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/system-node-high\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2022-05-11T23:12:42.199Z","caller":"traceutil/trace.go:171","msg":"trace[799166586] range","detail":"{range_begin:/registry/flowschemas/system-node-high; range_end:; response_count:0; response_revision:16; }","duration":"112.9992ms","start":"2022-05-11T23:12:42.086Z","end":"2022-05-11T23:12:42.199Z","steps":["trace[799166586] 'agreement among raft nodes before linearized reading'  (duration: 98.8954ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-11T23:12:42.199Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"112.802ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/csr-2qrwc\" ","response":"range_response_count:1 size:942"}
	{"level":"info","ts":"2022-05-11T23:12:42.199Z","caller":"traceutil/trace.go:171","msg":"trace[765925878] range","detail":"{range_begin:/registry/certificatesigningrequests/csr-2qrwc; range_end:; response_count:1; response_revision:16; }","duration":"113.0938ms","start":"2022-05-11T23:12:42.086Z","end":"2022-05-11T23:12:42.199Z","steps":["trace[765925878] 'agreement among raft nodes before linearized reading'  (duration: 98.8066ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-11T23:12:42.199Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"112.9243ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/functional-20220511231058-7184\" ","response":"range_response_count:1 size:2902"}
	{"level":"info","ts":"2022-05-11T23:12:42.200Z","caller":"traceutil/trace.go:171","msg":"trace[1112470360] range","detail":"{range_begin:/registry/minions/functional-20220511231058-7184; range_end:; response_count:1; response_revision:16; }","duration":"113.4194ms","start":"2022-05-11T23:12:42.086Z","end":"2022-05-11T23:12:42.200Z","steps":["trace[1112470360] 'agreement among raft nodes before linearized reading'  (duration: 98.9282ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-11T23:13:06.604Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"116.4916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-64897985d-cvj5g\" ","response":"range_response_count:1 size:4343"}
	{"level":"info","ts":"2022-05-11T23:13:06.604Z","caller":"traceutil/trace.go:171","msg":"trace[1742312913] range","detail":"{range_begin:/registry/pods/kube-system/coredns-64897985d-cvj5g; range_end:; response_count:1; response_revision:460; }","duration":"116.6522ms","start":"2022-05-11T23:13:06.488Z","end":"2022-05-11T23:13:06.604Z","steps":["trace[1742312913] 'agreement among raft nodes before linearized reading'  (duration: 97.7739ms)","trace[1742312913] 'range keys from in-memory index tree'  (duration: 18.6838ms)"],"step_count":2}
	{"level":"info","ts":"2022-05-11T23:15:21.495Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-05-11T23:15:21.495Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20220511231058-7184","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2022/05/11 23:15:21 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/05/11 23:15:21 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-05-11T23:15:21.683Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2022-05-11T23:15:21.883Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-05-11T23:15:21.885Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-05-11T23:15:21.885Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20220511231058-7184","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> etcd [3d613c6cc10d] <==
	* {"level":"warn","ts":"2022-05-11T23:18:17.295Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"163.5896ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/nginx-svc\" ","response":"range_response_count:1 size:1130"}
	{"level":"warn","ts":"2022-05-11T23:18:17.295Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.1830982s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:10950"}
	{"level":"info","ts":"2022-05-11T23:18:17.295Z","caller":"traceutil/trace.go:171","msg":"trace[676878884] range","detail":"{range_begin:/registry/services/specs/default/nginx-svc; range_end:; response_count:1; response_revision:850; }","duration":"163.6942ms","start":"2022-05-11T23:18:17.131Z","end":"2022-05-11T23:18:17.295Z","steps":["trace[676878884] 'agreement among raft nodes before linearized reading'  (duration: 163.6126ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-11T23:18:17.295Z","caller":"traceutil/trace.go:171","msg":"trace[1720153301] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:850; }","duration":"1.1831577s","start":"2022-05-11T23:18:16.112Z","end":"2022-05-11T23:18:17.295Z","steps":["trace[1720153301] 'agreement among raft nodes before linearized reading'  (duration: 1.1830238s)"],"step_count":1}
	{"level":"warn","ts":"2022-05-11T23:18:17.295Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-11T23:18:16.112Z","time spent":"1.183216s","remote":"127.0.0.1:41402","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":4,"response size":10974,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2022-05-11T23:18:17.295Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.2401692s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-05-11T23:18:17.295Z","caller":"traceutil/trace.go:171","msg":"trace[1439899376] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:850; }","duration":"1.2402227s","start":"2022-05-11T23:18:16.055Z","end":"2022-05-11T23:18:17.295Z","steps":["trace[1439899376] 'agreement among raft nodes before linearized reading'  (duration: 1.2401354s)"],"step_count":1}
	{"level":"warn","ts":"2022-05-11T23:18:17.295Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-11T23:18:16.055Z","time spent":"1.24028s","remote":"127.0.0.1:41486","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2022-05-11T23:18:17.295Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.0695196s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:10950"}
	{"level":"info","ts":"2022-05-11T23:18:17.296Z","caller":"traceutil/trace.go:171","msg":"trace[170829236] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:850; }","duration":"1.0696713s","start":"2022-05-11T23:18:16.226Z","end":"2022-05-11T23:18:17.295Z","steps":["trace[170829236] 'agreement among raft nodes before linearized reading'  (duration: 1.0693492s)"],"step_count":1}
	{"level":"warn","ts":"2022-05-11T23:18:17.296Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-11T23:18:16.226Z","time spent":"1.0698127s","remote":"127.0.0.1:41402","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":4,"response size":10974,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2022-05-11T23:18:18.409Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"178.1684ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:10950"}
	{"level":"info","ts":"2022-05-11T23:18:18.409Z","caller":"traceutil/trace.go:171","msg":"trace[1913611101] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:852; }","duration":"178.4104ms","start":"2022-05-11T23:18:18.230Z","end":"2022-05-11T23:18:18.409Z","steps":["trace[1913611101] 'range keys from in-memory index tree'  (duration: 178.0301ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-11T23:25:42.345Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":997}
	{"level":"info","ts":"2022-05-11T23:25:42.346Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":997,"took":"1.1418ms"}
	{"level":"info","ts":"2022-05-11T23:30:42.375Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1208}
	{"level":"info","ts":"2022-05-11T23:30:42.376Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1208,"took":"572.6µs"}
	{"level":"info","ts":"2022-05-11T23:35:42.412Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1416}
	{"level":"info","ts":"2022-05-11T23:35:42.413Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1416,"took":"557µs"}
	{"level":"info","ts":"2022-05-11T23:40:42.440Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1626}
	{"level":"info","ts":"2022-05-11T23:40:42.441Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1626,"took":"665.9µs"}
	{"level":"info","ts":"2022-05-11T23:45:42.472Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1836}
	{"level":"info","ts":"2022-05-11T23:45:42.473Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1836,"took":"613.5µs"}
	{"level":"info","ts":"2022-05-11T23:50:42.500Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2046}
	{"level":"info","ts":"2022-05-11T23:50:42.501Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":2046,"took":"550.8µs"}
	
	* 
	* ==> kernel <==
	*  23:51:14 up 59 min,  0 users,  load average: 0.37, 0.30, 0.38
	Linux functional-20220511231058-7184 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [4ef13dc1d015] <==
	* Trace[1497027280]: [1.104645s] [1.104645s] END
	I0511 23:18:16.035673       1 trace.go:205] Trace[2037031136]: "List etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (11-May-2022 23:18:13.291) (total time: 2744ms):
	Trace[2037031136]: [2.7443687s] [2.7443687s] END
	I0511 23:18:16.036433       1 trace.go:205] Trace[629667156]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:44094c38-f711-40de-9f45-07099914c476,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (11-May-2022 23:18:13.291) (total time: 2745ms):
	Trace[629667156]: ---"Listing from storage done" 2744ms (23:18:16.035)
	Trace[629667156]: [2.745172s] [2.745172s] END
	I0511 23:18:17.296385       1 trace.go:205] Trace[1756583222]: "GuaranteedUpdate etcd3" type:*core.Endpoints (11-May-2022 23:18:16.049) (total time: 1246ms):
	Trace[1756583222]: ---"Transaction committed" 1245ms (23:18:17.296)
	Trace[1756583222]: [1.2463771s] [1.2463771s] END
	I0511 23:18:17.296748       1 trace.go:205] Trace[1060657797]: "Update" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:70d6ce70-4a47-4be5-89d9-9130eee5da57,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (11-May-2022 23:18:16.049) (total time: 1247ms):
	Trace[1060657797]: ---"Object stored in database" 1246ms (23:18:17.296)
	Trace[1060657797]: [1.2472534s] [1.2472534s] END
	I0511 23:18:17.297702       1 trace.go:205] Trace[1987176994]: "List etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (11-May-2022 23:18:16.111) (total time: 1186ms):
	Trace[1987176994]: [1.1864729s] [1.1864729s] END
	I0511 23:18:17.297713       1 trace.go:205] Trace[1545867522]: "List etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (11-May-2022 23:18:16.225) (total time: 1072ms):
	Trace[1545867522]: [1.0723974s] [1.0723974s] END
	I0511 23:18:17.298245       1 trace.go:205] Trace[1773512067]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:1c48ec6c-8003-42f7-ae00-beaaf6f9840e,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (11-May-2022 23:18:16.111) (total time: 1187ms):
	Trace[1773512067]: ---"Listing from storage done" 1186ms (23:18:17.297)
	Trace[1773512067]: [1.1870671s] [1.1870671s] END
	I0511 23:18:17.299712       1 trace.go:205] Trace[2025221454]: "List" url:/api/v1/namespaces/default/pods,user-agent:e2e-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:3b14786b-32d7-4840-a3d1-8bbbccb80842,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (11-May-2022 23:18:16.225) (total time: 1074ms):
	Trace[2025221454]: ---"Listing from storage done" 1072ms (23:18:17.298)
	Trace[2025221454]: [1.0744501s] [1.0744501s] END
	I0511 23:18:39.813542       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.99.234.103]
	W0511 23:31:11.160042       1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
	W0511 23:45:34.250494       1 watcher.go:229] watch chan error: etcdserver: mvcc: required revision has been compacted
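The two "required revision has been compacted" watch errors line up with the etcd compactions logged earlier (revisions 997 through 2046, one every five minutes): a watch resumed from a revision older than the latest compaction is cancelled, and the apiserver's reflectors then re-list and re-watch from a newer revision. A hedged sketch with the etcd v3 client showing the same cancellation (the endpoint, key prefix, and start revision are illustrative, not taken from this run):

	package main

	import (
		"context"
		"fmt"
		"time"

		clientv3 "go.etcd.io/etcd/client/v3"
	)

	func main() {
		cli, err := clientv3.New(clientv3.Config{
			Endpoints:   []string{"127.0.0.1:2379"},
			DialTimeout: 5 * time.Second,
		})
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		// Watching from a revision older than the last compaction (e.g. older
		// than 997, per the etcd log above) yields a cancelled response with
		// ErrCompacted, the condition behind the apiserver warnings.
		for resp := range cli.Watch(context.Background(), "/registry/",
			clientv3.WithPrefix(), clientv3.WithRev(1)) {
			if resp.Canceled {
				fmt.Println("watch cancelled:", resp.Err(),
					"- compacted at revision", resp.CompactRevision)
				return
			}
		}
	}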
	
	* 
	* ==> kube-apiserver [581faf92a79b] <==
	* I0511 23:15:37.790781       1 server.go:565] external host was not specified, using 192.168.49.2
	I0511 23:15:37.792318       1 server.go:172] Version: v1.23.5
	E0511 23:15:37.793020       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
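This short-lived apiserver (listed as Exited in the container status table above) died because port 8441, the apiserver port from the cluster config at the top of this dump, was still held by the previous process when the replacement started. The failure mode is plain EADDRINUSE, reproducible with nothing but the Go standard library; the port number is the only value taken from this report:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// The first listener takes the port, as the surviving apiserver did.
		l, err := net.Listen("tcp", "0.0.0.0:8441")
		if err != nil {
			fmt.Println("first listen:", err)
			return
		}
		defer l.Close()

		// A second listener on the same address fails exactly like the log line:
		// "listen tcp 0.0.0.0:8441: bind: address already in use"
		if _, err := net.Listen("tcp", "0.0.0.0:8441"); err != nil {
			fmt.Println("second listen:", err)
		}
	}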
	
	* 
	* ==> kube-controller-manager [57d3d67362af] <==
	* I0511 23:16:00.087753       1 node_lifecycle_controller.go:1213] Controller detected that zone  is now in state Normal.
	I0511 23:16:00.088103       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0511 23:16:00.089337       1 event.go:294] "Event occurred" object="functional-20220511231058-7184" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-20220511231058-7184 event: Registered Node functional-20220511231058-7184 in Controller"
	I0511 23:16:00.091226       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0511 23:16:00.092020       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0511 23:16:00.108637       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0511 23:16:00.185042       1 shared_informer.go:247] Caches are synced for attach detach 
	I0511 23:16:00.195225       1 shared_informer.go:247] Caches are synced for cronjob 
	I0511 23:16:00.195402       1 shared_informer.go:247] Caches are synced for resource quota 
	I0511 23:16:00.198862       1 shared_informer.go:247] Caches are synced for job 
	I0511 23:16:00.199096       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0511 23:16:00.285661       1 shared_informer.go:247] Caches are synced for disruption 
	I0511 23:16:00.285826       1 disruption.go:371] Sending events to api server.
	I0511 23:16:00.285781       1 shared_informer.go:247] Caches are synced for resource quota 
	I0511 23:16:00.689474       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0511 23:16:00.689578       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0511 23:16:00.709700       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0511 23:16:57.190062       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0511 23:16:57.190231       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0511 23:17:30.215041       1 event.go:294] "Event occurred" object="default/mysql" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-b87c45988 to 1"
	I0511 23:17:30.401067       1 event.go:294] "Event occurred" object="default/mysql-b87c45988" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-b87c45988-v7bjw"
	I0511 23:17:39.191119       1 event.go:294] "Event occurred" object="default/hello-node-connect" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-74cf8bc446 to 1"
	I0511 23:17:39.292950       1 event.go:294] "Event occurred" object="default/hello-node-connect-74cf8bc446" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-74cf8bc446-45d4d"
	I0511 23:18:39.455195       1 event.go:294] "Event occurred" object="default/hello-node" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-54fbb85 to 1"
	I0511 23:18:39.470795       1 event.go:294] "Event occurred" object="default/hello-node-54fbb85" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-54fbb85-swswq"
	
	* 
	* ==> kube-controller-manager [786a5a9d87a9] <==
	* 	/usr/local/go/src/bytes/buffer.go:204 +0x98
	crypto/tls.(*Conn).readFromUntil(0xc00031d500, {0x4d4fe80, 0xc0006dc0a0}, 0x8ef)
		/usr/local/go/src/crypto/tls/conn.go:799 +0xe5
	crypto/tls.(*Conn).readRecordOrCCS(0xc00031d500, 0x0)
		/usr/local/go/src/crypto/tls/conn.go:606 +0x112
	crypto/tls.(*Conn).readRecord(...)
		/usr/local/go/src/crypto/tls/conn.go:574
	crypto/tls.(*Conn).Read(0xc00031d500, {0xc000d40000, 0x1000, 0x919560})
		/usr/local/go/src/crypto/tls/conn.go:1277 +0x16f
	bufio.(*Reader).Read(0xc0003b13e0, {0xc000d2a120, 0x9, 0x934bc2})
		/usr/local/go/src/bufio/bufio.go:227 +0x1b4
	io.ReadAtLeast({0x4d47860, 0xc0003b13e0}, {0xc000d2a120, 0x9, 0x9}, 0x9)
		/usr/local/go/src/io/io.go:328 +0x9a
	io.ReadFull(...)
		/usr/local/go/src/io/io.go:347
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader({0xc000d2a120, 0x9, 0xc001f7d3e0}, {0x4d47860, 0xc0003b13e0})
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x6e
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc000d2a0e0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:498 +0x95
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000aaaf98)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:2101 +0x130
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc000d3e000)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1997 +0x6f
	created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:725 +0xac5
	
	* 
	* ==> kube-proxy [c64417fcd163] <==
	* E0511 23:15:25.807112       1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I0511 23:15:25.887727       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0511 23:15:25.891061       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0511 23:15:25.894054       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0511 23:15:25.897705       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0511 23:15:25.900800       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	E0511 23:15:25.903815       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220511231058-7184": dial tcp 192.168.49.2:8441: connect: connection refused
	E0511 23:15:26.984233       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220511231058-7184": dial tcp 192.168.49.2:8441: connect: connection refused
	E0511 23:15:36.087665       1 node.go:152] Failed to retrieve node info: nodes "functional-20220511231058-7184" is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:node-proxier" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0511 23:15:40.222898       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220511231058-7184": dial tcp 192.168.49.2:8441: connect: connection refused
	I0511 23:15:49.002000       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0511 23:15:49.002075       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0511 23:15:49.002248       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0511 23:15:49.210604       1 server_others.go:206] "Using iptables Proxier"
	I0511 23:15:49.210824       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0511 23:15:49.210846       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0511 23:15:49.210954       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0511 23:15:49.211781       1 server.go:656] "Version info" version="v1.23.5"
	I0511 23:15:49.212709       1 config.go:317] "Starting service config controller"
	I0511 23:15:49.212838       1 config.go:226] "Starting endpoint slice config controller"
	I0511 23:15:49.212885       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0511 23:15:49.212886       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0511 23:15:49.314174       1 shared_informer.go:247] Caches are synced for service config 
	I0511 23:15:49.314367       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-proxy [d1e04917fa9d] <==
	* E0511 23:13:03.998105       1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I0511 23:13:04.076219       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0511 23:13:04.084843       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0511 23:13:04.088049       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0511 23:13:04.091094       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0511 23:13:04.094880       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0511 23:13:04.378379       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0511 23:13:04.378561       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0511 23:13:04.378657       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0511 23:13:04.690252       1 server_others.go:206] "Using iptables Proxier"
	I0511 23:13:04.690427       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0511 23:13:04.690442       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0511 23:13:04.690473       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0511 23:13:04.691292       1 server.go:656] "Version info" version="v1.23.5"
	I0511 23:13:04.692354       1 config.go:226] "Starting endpoint slice config controller"
	I0511 23:13:04.692504       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0511 23:13:04.692710       1 config.go:317] "Starting service config controller"
	I0511 23:13:04.692727       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0511 23:13:04.792728       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0511 23:13:04.793082       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [b592b1147725] <==
	* I0511 23:15:36.096210       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0511 23:15:36.096798       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0511 23:15:36.188763       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0511 23:15:36.188806       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0511 23:15:36.191208       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0511 23:15:36.191387       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0511 23:15:36.191497       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0511 23:15:36.191554       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0511 23:15:36.191596       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0511 23:15:36.196462       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0511 23:15:45.391917       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
	E0511 23:15:45.483432       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	E0511 23:15:45.483551       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	E0511 23:15:45.483639       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: unknown (get namespaces)
	E0511 23:15:45.483821       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
	E0511 23:15:45.483936       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
	E0511 23:15:45.484268       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
	E0511 23:15:45.484554       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
	E0511 23:15:45.484687       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)
	E0511 23:15:45.484858       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: unknown (get services)
	E0511 23:15:45.485098       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
	E0511 23:15:45.485223       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
	E0511 23:15:45.485386       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: unknown (get pods)
	E0511 23:15:45.485529       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
	E0511 23:15:45.489100       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	
	* 
	* ==> kube-scheduler [f6b592ff517b] <==
	* E0511 23:12:43.322282       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0511 23:12:43.377562       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0511 23:12:43.377698       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0511 23:12:43.413404       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0511 23:12:43.413530       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0511 23:12:43.478082       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0511 23:12:43.478132       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0511 23:12:43.530843       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0511 23:12:43.531072       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0511 23:12:43.577224       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0511 23:12:43.577365       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0511 23:12:43.597323       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0511 23:12:43.598137       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0511 23:12:43.610261       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0511 23:12:43.610393       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0511 23:12:43.627096       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0511 23:12:43.627235       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0511 23:12:43.777146       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0511 23:12:43.777263       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0511 23:12:43.778001       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0511 23:12:43.778129       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0511 23:12:46.484430       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0511 23:15:21.688634       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0511 23:15:21.689615       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0511 23:15:21.690032       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-05-11 23:11:55 UTC, end at Wed 2022-05-11 23:51:15 UTC. --
	May 11 23:17:30 functional-20220511231058-7184 kubelet[6148]: I0511 23:17:30.593324    6148 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7tcf\" (UniqueName: \"kubernetes.io/projected/13b3d752-6f39-45ca-88ec-1269924c718e-kube-api-access-g7tcf\") pod \"mysql-b87c45988-v7bjw\" (UID: \"13b3d752-6f39-45ca-88ec-1269924c718e\") " pod="default/mysql-b87c45988-v7bjw"
	May 11 23:17:31 functional-20220511231058-7184 kubelet[6148]: I0511 23:17:31.994227    6148 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-b87c45988-v7bjw through plugin: invalid network status for"
	May 11 23:17:31 functional-20220511231058-7184 kubelet[6148]: I0511 23:17:31.994792    6148 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="18bc3c9904b049acbe9709de816ea0c14a58a05cddad165d5da363cb5bd26c70"
	May 11 23:17:33 functional-20220511231058-7184 kubelet[6148]: I0511 23:17:33.013745    6148 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-b87c45988-v7bjw through plugin: invalid network status for"
	May 11 23:17:39 functional-20220511231058-7184 kubelet[6148]: I0511 23:17:39.392560    6148 topology_manager.go:200] "Topology Admit Handler"
	May 11 23:17:39 functional-20220511231058-7184 kubelet[6148]: I0511 23:17:39.505583    6148 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdxjs\" (UniqueName: \"kubernetes.io/projected/d6e35845-c328-4eda-87b4-8ae2f5d132bf-kube-api-access-cdxjs\") pod \"hello-node-connect-74cf8bc446-45d4d\" (UID: \"d6e35845-c328-4eda-87b4-8ae2f5d132bf\") " pod="default/hello-node-connect-74cf8bc446-45d4d"
	May 11 23:17:41 functional-20220511231058-7184 kubelet[6148]: E0511 23:17:41.600535    6148 kuberuntime_manager.go:1065] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unknown desc = Error: No such container: e766462e969432e74e6b9c6a506c1f9622a64f74d0019272f20d3b4d22dc0ad3" podSandboxID="e766462e969432e74e6b9c6a506c1f9622a64f74d0019272f20d3b4d22dc0ad3" pod="default/hello-node-connect-74cf8bc446-45d4d"
	May 11 23:17:45 functional-20220511231058-7184 kubelet[6148]: I0511 23:17:45.407427    6148 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="e766462e969432e74e6b9c6a506c1f9622a64f74d0019272f20d3b4d22dc0ad3"
	May 11 23:17:45 functional-20220511231058-7184 kubelet[6148]: I0511 23:17:45.407953    6148 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-connect-74cf8bc446-45d4d through plugin: invalid network status for"
	May 11 23:17:46 functional-20220511231058-7184 kubelet[6148]: I0511 23:17:46.502450    6148 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-connect-74cf8bc446-45d4d through plugin: invalid network status for"
	May 11 23:18:19 functional-20220511231058-7184 kubelet[6148]: I0511 23:18:19.117657    6148 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-b87c45988-v7bjw through plugin: invalid network status for"
	May 11 23:18:20 functional-20220511231058-7184 kubelet[6148]: I0511 23:18:20.306238    6148 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/mysql-b87c45988-v7bjw through plugin: invalid network status for"
	May 11 23:18:34 functional-20220511231058-7184 kubelet[6148]: I0511 23:18:34.003546    6148 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-connect-74cf8bc446-45d4d through plugin: invalid network status for"
	May 11 23:18:39 functional-20220511231058-7184 kubelet[6148]: I0511 23:18:39.501209    6148 topology_manager.go:200] "Topology Admit Handler"
	May 11 23:18:39 functional-20220511231058-7184 kubelet[6148]: I0511 23:18:39.692275    6148 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbt42\" (UniqueName: \"kubernetes.io/projected/0c2db6df-37b9-4201-b3a9-44e6d839ff68-kube-api-access-cbt42\") pod \"hello-node-54fbb85-swswq\" (UID: \"0c2db6df-37b9-4201-b3a9-44e6d839ff68\") " pod="default/hello-node-54fbb85-swswq"
	May 11 23:18:41 functional-20220511231058-7184 kubelet[6148]: I0511 23:18:41.896771    6148 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="bcbdd1bb4acb8e7c121f11d4744ce17b9f8e50f836089e58b1fa1e8b5735ff29"
	May 11 23:18:41 functional-20220511231058-7184 kubelet[6148]: I0511 23:18:41.897080    6148 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-54fbb85-swswq through plugin: invalid network status for"
	May 11 23:18:42 functional-20220511231058-7184 kubelet[6148]: I0511 23:18:42.924303    6148 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hello-node-54fbb85-swswq through plugin: invalid network status for"
	May 11 23:20:34 functional-20220511231058-7184 kubelet[6148]: W0511 23:20:34.705922    6148 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	May 11 23:25:34 functional-20220511231058-7184 kubelet[6148]: W0511 23:25:34.720662    6148 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	May 11 23:30:34 functional-20220511231058-7184 kubelet[6148]: W0511 23:30:34.735541    6148 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	May 11 23:35:34 functional-20220511231058-7184 kubelet[6148]: W0511 23:35:34.751063    6148 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	May 11 23:40:34 functional-20220511231058-7184 kubelet[6148]: W0511 23:40:34.765005    6148 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	May 11 23:45:34 functional-20220511231058-7184 kubelet[6148]: W0511 23:45:34.779955    6148 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	May 11 23:50:34 functional-20220511231058-7184 kubelet[6148]: W0511 23:50:34.796315    6148 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	
	* 
	* ==> storage-provisioner [0b073da11a84] <==
	* I0511 23:15:38.888473       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0511 23:15:38.891816       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> storage-provisioner [cbc60d45890a] <==
	* I0511 23:15:52.429194       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0511 23:15:52.455864       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0511 23:15:52.455991       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0511 23:16:10.010847       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0511 23:16:10.011353       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f16db442-9f3a-4556-9cd7-8f812974ae3f", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20220511231058-7184_e4da1a1e-b843-4d40-bf0b-49dd82e2a059 became leader
	I0511 23:16:10.011542       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20220511231058-7184_e4da1a1e-b843-4d40-bf0b-49dd82e2a059!
	I0511 23:16:10.112171       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20220511231058-7184_e4da1a1e-b843-4d40-bf0b-49dd82e2a059!
	I0511 23:16:57.189570       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0511 23:16:57.189876       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    6daa51e5-b71b-4ede-a6c6-744ebb56aad5 460 0 2022-05-11 23:13:06 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2022-05-11 23:13:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-5ab57200-48da-4413-b811-626ed007f66e &PersistentVolumeClaim{ObjectMeta:{myclaim  default  5ab57200-48da-4413-b811-626ed007f66e 723 0 2022-05-11 23:16:57 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2022-05-11 23:16:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl.exe Update v1 2022-05-11 23:16:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0511 23:16:57.192717       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"5ab57200-48da-4413-b811-626ed007f66e", APIVersion:"v1", ResourceVersion:"723", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0511 23:16:57.192906       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-5ab57200-48da-4413-b811-626ed007f66e" provisioned
	I0511 23:16:57.193054       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0511 23:16:57.193275       1 volume_store.go:212] Trying to save persistentvolume "pvc-5ab57200-48da-4413-b811-626ed007f66e"
	I0511 23:16:57.208460       1 volume_store.go:219] persistentvolume "pvc-5ab57200-48da-4413-b811-626ed007f66e" saved
	I0511 23:16:57.208684       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"5ab57200-48da-4413-b811-626ed007f66e", APIVersion:"v1", ResourceVersion:"723", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-5ab57200-48da-4413-b811-626ed007f66e
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20220511231058-7184 -n functional-20220511231058-7184
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-20220511231058-7184 -n functional-20220511231058-7184: (6.5100218s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-20220511231058-7184 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/ServiceCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context functional-20220511231058-7184 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context functional-20220511231058-7184 describe pod : exit status 1 (248.7123ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context functional-20220511231058-7184 describe pod : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd (1963.92s)
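
Note on the post-mortem step above: `kubectl describe pod` exits 1 only because the non-running-pod query returned nothing, so describe was invoked with no pod names and kubectl rejects an empty resource name. Below is a minimal Go sketch of that flow with an empty-list guard; `describeNonRunningPods` is a hypothetical helper (not the actual helpers_test.go code), kubectl on PATH is assumed, and the context name is copied from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// describeNonRunningPods lists pods whose phase is not Running and only
// calls `kubectl describe pod` when the list is non-empty.
func describeNonRunningPods(kubeContext string) error {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"get", "po", "-o=jsonpath={.items[*].metadata.name}",
		"-A", "--field-selector=status.phase!=Running").Output()
	if err != nil {
		return err
	}
	names := strings.Fields(string(out))
	if len(names) == 0 {
		// Without this guard, `kubectl describe pod` with no arguments
		// fails with "error: resource name may not be empty", as seen above.
		fmt.Println("no non-running pods to describe")
		return nil
	}
	args := append([]string{"--context", kubeContext, "describe", "pod"}, names...)
	desc, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Println(string(desc))
	return err
}

func main() {
	if err := describeNonRunningPods("functional-20220511231058-7184"); err != nil {
		fmt.Println("describe failed:", err)
	}
}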

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (180.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220511231058-7184 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220511231058-7184 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
[the identical jsonpath poll above is re-run repeatedly for ~3 minutes; the duplicate retries are elided]
E0511 23:19:08.205700    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
functional_test_tunnel_test.go:180: nginx-svc svc.status.loadBalancer.ingress never got an IP: timed out waiting for the condition
functional_test_tunnel_test.go:181: (dbg) Run:  kubectl --context functional-20220511231058-7184 get svc nginx-svc
functional_test_tunnel_test.go:185: failed to kubectl get svc nginx-svc:

                                                
                                                
-- stdout --
	NAME        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
	nginx-svc   LoadBalancer   10.105.81.136   <pending>     80:31225/TCP   3m17s

                                                
                                                
-- /stdout --
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (180.93s)
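
The elided retries all run the same jsonpath probe; the test fails once the LoadBalancer ingress IP is still empty at the deadline (here because `minikube tunnel` never assigned one, leaving the service `<pending>`). Below is a minimal Go sketch of such a poll loop; `pollIngressIP` and the 3-second back-off are illustrative assumptions, not the functional_test_tunnel_test.go implementation, and kubectl on PATH is assumed.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// pollIngressIP re-runs the kubectl jsonpath query until the LoadBalancer
// ingress IP appears or the deadline passes.
func pollIngressIP(kubeContext, svc string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "svc", svc,
			"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		if err == nil {
			if ip := strings.TrimSpace(string(out)); ip != "" {
				return ip, nil
			}
		}
		time.Sleep(3 * time.Second) // assumed back-off between polls
	}
	return "", fmt.Errorf("%s never got a LoadBalancer ingress IP within %v", svc, timeout)
}

func main() {
	ip, err := pollIngressIP("functional-20220511231058-7184", "nginx-svc", 3*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ingress IP:", ip)
}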

                                                
                                    
TestSkaffold (178.25s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:56: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\skaffold.exe1032344016 version
skaffold_test.go:60: skaffold version: v1.38.0
skaffold_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p skaffold-20220512004259-7184 --memory=2600 --driver=docker
E0512 00:44:08.465914    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0512 00:44:52.854038    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
skaffold_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p skaffold-20220512004259-7184 --memory=2600 --driver=docker: (1m56.7458805s)
skaffold_test.go:83: copying out/minikube-windows-amd64.exe to C:\jenkins\workspace\Docker_Windows_integration\out\minikube.exe
skaffold_test.go:107: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\skaffold.exe1032344016 run --minikube-profile skaffold-20220512004259-7184 --kube-context skaffold-20220512004259-7184 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:107: (dbg) Non-zero exit: C:\Users\jenkins.minikube4\AppData\Local\Temp\skaffold.exe1032344016 run --minikube-profile skaffold-20220512004259-7184 --kube-context skaffold-20220512004259-7184 --status-check=true --port-forward=false --interactive=false: exit status 1 (16.0211937s)

                                                
                                                
-- stdout --
	Generating tags...
	 - leeroy-web -> leeroy-web:latest
	 - leeroy-app -> leeroy-app:latest
	Some taggers failed. Rerun with -vdebug for errors.
	Checking cache...
	 - leeroy-web: Not found. Building
	 - leeroy-app: Not found. Building
	Starting build...
	Found [skaffold-20220512004259-7184] context, using local docker daemon.
	Building [leeroy-app]...
	Target platforms: [linux/amd64]
	#1 [internal] load build definition from Dockerfile
	#1 sha256:2cdca283227f2769af5919553907959a58ada2b8499237df3ceeb40af670fc76
	#1 transferring dockerfile:
	#1 transferring dockerfile: 345B 0.0s done
	#1 DONE 0.2s
	
	#2 [internal] load .dockerignore
	#2 sha256:bd0b65684ba9d2de97a0fcfe576ad4dca2a57bb3f1bbef30df96980c2c9b7f09
	#2 transferring context: 2B 0.0s done
	#2 DONE 0.2s
	
	#4 [internal] load metadata for docker.io/library/golang:1.12.9-alpine3.10
	#4 sha256:3e6280708dea593be8ec70e0050e1a81cce57ccd8855e8cbe6de9abfeed8cee7
	#4 ERROR: rpc error: code = Unknown desc = error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	
	#3 [internal] load metadata for docker.io/library/alpine:3.10
	#3 sha256:ac8c9d4b8fc421ddf809bac2b79af6ebec0aa591815b5d2abf229ccdfba18d01
	#3 ERROR: rpc error: code = Unknown desc = error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	------
	 > [internal] load metadata for docker.io/library/alpine:3.10:
	------
	------
	 > [internal] load metadata for docker.io/library/golang:1.12.9-alpine3.10:
	------
	failed to solve with frontend dockerfile.v0: failed to create LLB definition: rpc error: code = Unknown desc = error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	Building [leeroy-web]...
	Target platforms: [linux/amd64]
	Build [leeroy-web] was canceled

                                                
                                                
-- /stdout --
** stderr ** 
	build [leeroy-app] failed: exit status 1. Docker build ran into internal error. Please retry.
	If this keeps happening, please open an issue..

                                                
                                                
** /stderr **
skaffold_test.go:109: error running skaffold: exit status 1
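
The failure in the capture above is BuildKit being unable to load registry credentials: on Windows, the Docker CLI delegates to a credential helper named by `credsStore` in `%USERPROFILE%\.docker\config.json`, and "A specified logon session does not exist" is a Windows credential-store error surfaced through that helper (plausible when CI runs without the user's interactive logon session). Below is a minimal diagnostic sketch in Go that prints which helper the config selects; the default config path is an assumption.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Default Docker CLI config location; an assumption, since CI agents
	// may point DOCKER_CONFIG elsewhere.
	home, err := os.UserHomeDir()
	if err != nil {
		panic(err)
	}
	raw, err := os.ReadFile(filepath.Join(home, ".docker", "config.json"))
	if err != nil {
		panic(err)
	}
	var cfg struct {
		CredsStore  string            `json:"credsStore"`
		CredHelpers map[string]string `json:"credHelpers"`
	}
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	// The CLI invokes docker-credential-<credsStore> to fetch registry
	// credentials; on Windows that helper is what surfaces errors like
	// "A specified logon session does not exist".
	fmt.Printf("credsStore=%q credHelpers=%v\n", cfg.CredsStore, cfg.CredHelpers)
}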

                                                
                                                
	[stdout and stderr here repeat the capture above verbatim; duplicate elided]
panic.go:482: *** TestSkaffold FAILED at 2022-05-12 00:45:12.8963546 +0000 GMT m=+6589.337877301
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-20220512004259-7184
helpers_test.go:231: (dbg) Done: docker inspect skaffold-20220512004259-7184: (1.0754918s)
helpers_test.go:235: (dbg) docker inspect skaffold-20220512004259-7184:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0efbe11f31aa85a7eb973619ec37eac36331fc28544d5652ab12972aa6d4dc14",
	        "Created": "2022-05-12T00:43:56.5094192Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 120670,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-12T00:43:57.47985Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8a42e1145657f551cd435eddb43b96ab44d0facbe44106da934225366eeb7757",
	        "ResolvConfPath": "/var/lib/docker/containers/0efbe11f31aa85a7eb973619ec37eac36331fc28544d5652ab12972aa6d4dc14/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0efbe11f31aa85a7eb973619ec37eac36331fc28544d5652ab12972aa6d4dc14/hostname",
	        "HostsPath": "/var/lib/docker/containers/0efbe11f31aa85a7eb973619ec37eac36331fc28544d5652ab12972aa6d4dc14/hosts",
	        "LogPath": "/var/lib/docker/containers/0efbe11f31aa85a7eb973619ec37eac36331fc28544d5652ab12972aa6d4dc14/0efbe11f31aa85a7eb973619ec37eac36331fc28544d5652ab12972aa6d4dc14-json.log",
	        "Name": "/skaffold-20220512004259-7184",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "skaffold-20220512004259-7184:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "skaffold-20220512004259-7184",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2726297600,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2726297600,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2d4748689afd948f3a5b6da8fac6447c1c6c010963b9513d0f87ff18af244eef-init/diff:/var/lib/docker/overlay2/f5e9ce82d74c36e0b6de7ac5d28dc4eacb2abae094b3550db66974795ad74446/diff:/var/lib/docker/overlay2/63fa2edc88f635760dd847938696c8fc50aad8a0c51ab6c6f93b0aa9a6fcefe6/diff:/var/lib/docker/overlay2/5fcaace21e215fd120a775470b4a78ef06d9f466e024f0b1baef234ddf87f04f/diff:/var/lib/docker/overlay2/6c9accb62919ca088333f8b3a193f43886e2c3042c5ec726e1c7fd19ee183024/diff:/var/lib/docker/overlay2/a9a1aea640018bd9e524c36422ea97cc97a391601b718810fed779260e48c87a/diff:/var/lib/docker/overlay2/2476b6e8d656e43d76d4288098236bc765cb37fa9dde696f09bfce485e99229e/diff:/var/lib/docker/overlay2/6cdf08ddc61561e961de67f3e14478397868c53a4d8552909a5b84e35b28db1f/diff:/var/lib/docker/overlay2/6f6f8b6686cc7838a52ce30fdc4e938cde2fb68b161e09a9bc81fa11011619a6/diff:/var/lib/docker/overlay2/9f55b91f762ea60dc3da71cf56e5ca24181e712389d99cb8d972bba474f5d6a4/diff:/var/lib/docker/overlay2/4c269b
df57eae0d21d2034dc086308278f9b00f2301c726b6de876b9ff97298d/diff:/var/lib/docker/overlay2/5fb8ed9b9e765df8150f27b7847ec7333b2c7d978dbc1161da97d630ec7e43e2/diff:/var/lib/docker/overlay2/3a297e9f6ab51d930ef61c49a0fea772cdc2a2704a077db6adb142eb044d9a93/diff:/var/lib/docker/overlay2/2068464f4655627fb513b31660ab34c938e559da10d44fd723ce9e1d744a037d/diff:/var/lib/docker/overlay2/f783596106daebadefbb7774015c369d757665d434c96581b426b2e5f5b453c4/diff:/var/lib/docker/overlay2/ac8927d3cc7829cc82e4c0214dd4adee97aedbe2b7d992cbbc08288443c8e320/diff:/var/lib/docker/overlay2/91eff4612dd56b2c82f4357b879f9cdcbc13704bf8f6dcbfc56febb104774843/diff:/var/lib/docker/overlay2/ee7366acc162efb7b878c4c56df021a8ef756fa595230ffe898cd0dd0355eb44/diff:/var/lib/docker/overlay2/ab5df115d2ec8cd71172942a2e449de32b93a3b6b5d90122c0e734c5e11d6bb2/diff:/var/lib/docker/overlay2/59fcbe9b552129cdc5c96e9e8ff27f4b88a12645aec3cf8f48f28d91521760e0/diff:/var/lib/docker/overlay2/0684681eb5880654b43e8803ef8f17b85e6129c85ff81c13b509563184a77625/diff:/var/lib/d
ocker/overlay2/8bbd24801b480df6ca8545e8f8bb09c17b7598c2868fb94ea5b8775ce2f311b4/diff:/var/lib/docker/overlay2/f28553cc59fccccfdfc5c24b7b8dfe4055c625d0a004731911c34b4ba32a9dfb/diff:/var/lib/docker/overlay2/2e47a8ef6e4481885d71f57a1d9ef99898b741644addd2796de5c2f4c696edb0/diff:/var/lib/docker/overlay2/e1f1eaaa809c974dabd197f590d19be05325f506a53a9a1f8ba29defd7096f60/diff:/var/lib/docker/overlay2/83ce12af60df76f98283ed8f3450cd5727b42d06055b18f04a18068b105ae128/diff:/var/lib/docker/overlay2/5fd34820f54e7f8f0c898c21b5d9d030e5b82c65c901897306c3db475481167f/diff:/var/lib/docker/overlay2/3ecb6f46fa47a8906ff5de1da5a63be9c664ff5bc66faf870126868d36bb77c6/diff:/var/lib/docker/overlay2/ccb92f12dd3e84b11b2c9b1ef6a0581ad5894648432ebe7cb5d16d48c7aacf6e/diff:/var/lib/docker/overlay2/7c6d11dc9abdd4916f3759c8ae4db8c3011cff872f2fd3cc502e7f663e496765/diff:/var/lib/docker/overlay2/b865b0351704115fa113e25f7651d1dc1e2f0348c332552e555e898094f34802/diff:/var/lib/docker/overlay2/bbcf207462c3f88368214d8e4ca222f28a828bd30661741d421665b4d10
80f07/diff:/var/lib/docker/overlay2/b554a32e9a2e4d3773e918754c27a1b32bc7ec5327d3bd1f52d7a146a07fa2c5/diff:/var/lib/docker/overlay2/d0a997bacfa9b1b54f61c62f00ad2797616ea9bb55182aad68ed805f96f5f72b/diff:/var/lib/docker/overlay2/e0c168ecfe6a93618f4f653c1aba422023114f242ab1045591d0c8454573d5c2/diff:/var/lib/docker/overlay2/fb67af38a46ef55935fcfb4f1be5f34b45b3d0e1c571538828117f23eedea417/diff:/var/lib/docker/overlay2/e96ed0776e5f27ef225469ac5f5e8ed2e299c72d5db88782599c0fdd1cec2fe3/diff:/var/lib/docker/overlay2/91b77e60e0a7864ace4f5a4d65f465bd7fe862616a87a74ee9fee21dc5dceb07/diff:/var/lib/docker/overlay2/9829211293f70b356dfa8d07b5dbbc3a6d05415cbd2840fd9dd948b8b315bf18/diff:/var/lib/docker/overlay2/dc35dda36e34a2f4f3a5d958b1a7d4d75db8655c4bc7b4b3d9591f43f9a645fc/diff:/var/lib/docker/overlay2/968c2bb04f641a9c8bd30d38659dc28973b31bfd577bb1aa355ae6c2ab4a0d34/diff:/var/lib/docker/overlay2/37432c6ae0b10a52e95b215fdd2e256362060f32c4a52f0d2021b6e10b3ed77b/diff:/var/lib/docker/overlay2/77687f9734b19f3e8a5bb08b07067e0b572775
20867b7a6ad80b67ffebe332d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2d4748689afd948f3a5b6da8fac6447c1c6c010963b9513d0f87ff18af244eef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2d4748689afd948f3a5b6da8fac6447c1c6c010963b9513d0f87ff18af244eef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2d4748689afd948f3a5b6da8fac6447c1c6c010963b9513d0f87ff18af244eef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "skaffold-20220512004259-7184",
	                "Source": "/var/lib/docker/volumes/skaffold-20220512004259-7184/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "skaffold-20220512004259-7184",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "skaffold-20220512004259-7184",
	                "name.minikube.sigs.k8s.io": "skaffold-20220512004259-7184",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eb34885c0e7ea33b7f5a373189b3bab45cb67ecd87a18b21a887f6474b2f473d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49216"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49212"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49213"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49214"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49215"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/eb34885c0e7e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "skaffold-20220512004259-7184": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0efbe11f31aa",
	                        "skaffold-20220512004259-7184"
	                    ],
	                    "NetworkID": "0b479757a77ca8d07db34665a7cd5cee6392727c48c0ed0ff283c5b9674db5d6",
	                    "EndpointID": "4bd800ba91eec03e37ad379638d5e53ec0b6535eb2eaa683a5482f4547e993e1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p skaffold-20220512004259-7184 -n skaffold-20220512004259-7184
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p skaffold-20220512004259-7184 -n skaffold-20220512004259-7184: (6.5502066s)
helpers_test.go:244: <<< TestSkaffold FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestSkaffold]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p skaffold-20220512004259-7184 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p skaffold-20220512004259-7184 logs -n 25: (7.8471916s)
helpers_test.go:252: TestSkaffold logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |------------|--------------------------------------------------------------------------------------------------------------------------------|------------------------------------|-------------------|---------|---------------------|---------------------|
	|  Command   |                                                              Args                                                              |              Profile               |       User        | Version |     Start Time      |      End Time       |
	|------------|--------------------------------------------------------------------------------------------------------------------------------|------------------------------------|-------------------|---------|---------------------|---------------------|
	| cp         | multinode-20220512001153-7184 cp multinode-20220512001153-7184-m03:/home/docker/cp-test.txt                                    | multinode-20220512001153-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:22 GMT | 12 May 22 00:22 GMT |
	|            | multinode-20220512001153-7184-m02:/home/docker/cp-test_multinode-20220512001153-7184-m03_multinode-20220512001153-7184-m02.txt |                                    |                   |         |                     |                     |
	| ssh        | multinode-20220512001153-7184                                                                                                  | multinode-20220512001153-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:22 GMT | 12 May 22 00:22 GMT |
	|            | ssh -n                                                                                                                         |                                    |                   |         |                     |                     |
	|            | multinode-20220512001153-7184-m03                                                                                              |                                    |                   |         |                     |                     |
	|            | sudo cat /home/docker/cp-test.txt                                                                                              |                                    |                   |         |                     |                     |
	| ssh        | multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m02 sudo cat                                                | multinode-20220512001153-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:22 GMT | 12 May 22 00:22 GMT |
	|            | /home/docker/cp-test_multinode-20220512001153-7184-m03_multinode-20220512001153-7184-m02.txt                                   |                                    |                   |         |                     |                     |
	| node       | multinode-20220512001153-7184                                                                                                  | multinode-20220512001153-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:22 GMT | 12 May 22 00:22 GMT |
	|            | node stop m03                                                                                                                  |                                    |                   |         |                     |                     |
	| node       | multinode-20220512001153-7184                                                                                                  | multinode-20220512001153-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:22 GMT | 12 May 22 00:23 GMT |
	|            | node start m03                                                                                                                 |                                    |                   |         |                     |                     |
	|            | --alsologtostderr                                                                                                              |                                    |                   |         |                     |                     |
	| stop       | -p                                                                                                                             | multinode-20220512001153-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:23 GMT | 12 May 22 00:24 GMT |
	|            | multinode-20220512001153-7184                                                                                                  |                                    |                   |         |                     |                     |
	| start      | -p                                                                                                                             | multinode-20220512001153-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:24 GMT | 12 May 22 00:27 GMT |
	|            | multinode-20220512001153-7184                                                                                                  |                                    |                   |         |                     |                     |
	|            | --wait=true -v=8                                                                                                               |                                    |                   |         |                     |                     |
	|            | --alsologtostderr                                                                                                              |                                    |                   |         |                     |                     |
	| node       | multinode-20220512001153-7184                                                                                                  | multinode-20220512001153-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:27 GMT | 12 May 22 00:27 GMT |
	|            | node delete m03                                                                                                                |                                    |                   |         |                     |                     |
	| stop       | multinode-20220512001153-7184                                                                                                  | multinode-20220512001153-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:27 GMT | 12 May 22 00:28 GMT |
	|            | stop                                                                                                                           |                                    |                   |         |                     |                     |
	| start      | -p                                                                                                                             | multinode-20220512001153-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:28 GMT | 12 May 22 00:30 GMT |
	|            | multinode-20220512001153-7184                                                                                                  |                                    |                   |         |                     |                     |
	|            | --wait=true -v=8                                                                                                               |                                    |                   |         |                     |                     |
	|            | --alsologtostderr                                                                                                              |                                    |                   |         |                     |                     |
	|            | --driver=docker                                                                                                                |                                    |                   |         |                     |                     |
	| start      | -p                                                                                                                             | multinode-20220512001153-7184-m03  | minikube4\jenkins | v1.25.2 | 12 May 22 00:30 GMT | 12 May 22 00:32 GMT |
	|            | multinode-20220512001153-7184-m03                                                                                              |                                    |                   |         |                     |                     |
	|            | --driver=docker                                                                                                                |                                    |                   |         |                     |                     |
	| delete     | -p                                                                                                                             | multinode-20220512001153-7184-m03  | minikube4\jenkins | v1.25.2 | 12 May 22 00:32 GMT | 12 May 22 00:33 GMT |
	|            | multinode-20220512001153-7184-m03                                                                                              |                                    |                   |         |                     |                     |
	| delete     | -p                                                                                                                             | multinode-20220512001153-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:33 GMT | 12 May 22 00:33 GMT |
	|            | multinode-20220512001153-7184                                                                                                  |                                    |                   |         |                     |                     |
	| start      | -p                                                                                                                             | test-preload-20220512003344-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:33 GMT | 12 May 22 00:36 GMT |
	|            | test-preload-20220512003344-7184                                                                                               |                                    |                   |         |                     |                     |
	|            | --memory=2200 --alsologtostderr                                                                                                |                                    |                   |         |                     |                     |
	|            | --wait=true --preload=false                                                                                                    |                                    |                   |         |                     |                     |
	|            | --driver=docker                                                                                                                |                                    |                   |         |                     |                     |
	|            | --kubernetes-version=v1.17.0                                                                                                   |                                    |                   |         |                     |                     |
	| ssh        | -p                                                                                                                             | test-preload-20220512003344-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:36 GMT | 12 May 22 00:36 GMT |
	|            | test-preload-20220512003344-7184                                                                                               |                                    |                   |         |                     |                     |
	|            | -- docker pull                                                                                                                 |                                    |                   |         |                     |                     |
	|            | gcr.io/k8s-minikube/busybox                                                                                                    |                                    |                   |         |                     |                     |
	| start      | -p                                                                                                                             | test-preload-20220512003344-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:36 GMT | 12 May 22 00:38 GMT |
	|            | test-preload-20220512003344-7184                                                                                               |                                    |                   |         |                     |                     |
	|            | --memory=2200 --alsologtostderr                                                                                                |                                    |                   |         |                     |                     |
	|            | -v=1 --wait=true --driver=docker                                                                                               |                                    |                   |         |                     |                     |
	|            | --kubernetes-version=v1.17.3                                                                                                   |                                    |                   |         |                     |                     |
	| ssh        | -p                                                                                                                             | test-preload-20220512003344-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:38 GMT | 12 May 22 00:38 GMT |
	|            | test-preload-20220512003344-7184                                                                                               |                                    |                   |         |                     |                     |
	|            | -- docker images                                                                                                               |                                    |                   |         |                     |                     |
	| delete     | -p                                                                                                                             | test-preload-20220512003344-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:38 GMT | 12 May 22 00:39 GMT |
	|            | test-preload-20220512003344-7184                                                                                               |                                    |                   |         |                     |                     |
	| start      | -p                                                                                                                             | scheduled-stop-20220512003922-7184 | minikube4\jenkins | v1.25.2 | 12 May 22 00:39 GMT | 12 May 22 00:41 GMT |
	|            | scheduled-stop-20220512003922-7184                                                                                             |                                    |                   |         |                     |                     |
	|            | --memory=2048 --driver=docker                                                                                                  |                                    |                   |         |                     |                     |
	| stop       | -p                                                                                                                             | scheduled-stop-20220512003922-7184 | minikube4\jenkins | v1.25.2 | 12 May 22 00:41 GMT | 12 May 22 00:41 GMT |
	|            | scheduled-stop-20220512003922-7184                                                                                             |                                    |                   |         |                     |                     |
	|            | --schedule 5m                                                                                                                  |                                    |                   |         |                     |                     |
	| ssh        | -p                                                                                                                             | scheduled-stop-20220512003922-7184 | minikube4\jenkins | v1.25.2 | 12 May 22 00:41 GMT | 12 May 22 00:41 GMT |
	|            | scheduled-stop-20220512003922-7184                                                                                             |                                    |                   |         |                     |                     |
	|            | -- sudo systemctl show                                                                                                         |                                    |                   |         |                     |                     |
	|            | minikube-scheduled-stop --no-page                                                                                              |                                    |                   |         |                     |                     |
	| stop       | -p                                                                                                                             | scheduled-stop-20220512003922-7184 | minikube4\jenkins | v1.25.2 | 12 May 22 00:41 GMT | 12 May 22 00:41 GMT |
	|            | scheduled-stop-20220512003922-7184                                                                                             |                                    |                   |         |                     |                     |
	|            | --schedule 5s                                                                                                                  |                                    |                   |         |                     |                     |
	| delete     | -p                                                                                                                             | scheduled-stop-20220512003922-7184 | minikube4\jenkins | v1.25.2 | 12 May 22 00:42 GMT | 12 May 22 00:42 GMT |
	|            | scheduled-stop-20220512003922-7184                                                                                             |                                    |                   |         |                     |                     |
	| start      | -p                                                                                                                             | skaffold-20220512004259-7184       | minikube4\jenkins | v1.25.2 | 12 May 22 00:43 GMT | 12 May 22 00:44 GMT |
	|            | skaffold-20220512004259-7184                                                                                                   |                                    |                   |         |                     |                     |
	|            | --memory=2600 --driver=docker                                                                                                  |                                    |                   |         |                     |                     |
	| docker-env | --shell none -p                                                                                                                | skaffold-20220512004259-7184       | skaffold          | v1.25.2 | 12 May 22 00:44 GMT | 12 May 22 00:45 GMT |
	|            | skaffold-20220512004259-7184                                                                                                   |                                    |                   |         |                     |                     |
	|            | --user=skaffold                                                                                                                |                                    |                   |         |                     |                     |
	|------------|--------------------------------------------------------------------------------------------------------------------------------|------------------------------------|-------------------|---------|---------------------|---------------------|
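The audit table above is minikube's own record of every command run against these profiles. As a readability aid, the scheduled-stop sequence it captures can be replayed by hand; this is a minimal sketch using only flags that appear in the table (the profile name here is illustrative, not the one from this run):

	minikube start -p scheduled-stop-demo --memory=2048 --driver=docker
	minikube stop -p scheduled-stop-demo --schedule 5m
	minikube ssh -p scheduled-stop-demo -- sudo systemctl show minikube-scheduled-stop --no-page
	minikube stop -p scheduled-stop-demo --schedule 5s
	minikube delete -p scheduled-stop-demo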
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/12 00:43:00
	Running on machine: minikube4
	Binary: Built with gc go1.18.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0512 00:43:00.237544    7916 out.go:296] Setting OutFile to fd 1428 ...
	I0512 00:43:00.301200    7916 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 00:43:00.301200    7916 out.go:309] Setting ErrFile to fd 1432...
	I0512 00:43:00.301200    7916 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 00:43:00.315752    7916 out.go:303] Setting JSON to false
	I0512 00:43:00.317786    7916 start.go:115] hostinfo: {"hostname":"minikube4","uptime":14633,"bootTime":1652301547,"procs":162,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0512 00:43:00.317786    7916 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0512 00:43:00.665670    7916 out.go:177] * [skaffold-20220512004259-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0512 00:43:00.669871    7916 notify.go:193] Checking for updates...
	I0512 00:43:00.672103    7916 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0512 00:43:00.677547    7916 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0512 00:43:00.679997    7916 out.go:177]   - MINIKUBE_LOCATION=13639
	I0512 00:43:00.682681    7916 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0512 00:43:00.685572    7916 driver.go:358] Setting default libvirt URI to qemu:///system
	I0512 00:43:03.352156    7916 docker.go:137] docker version: linux-20.10.14
	I0512 00:43:03.360759    7916 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 00:43:05.493364    7916 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1324648s)
	I0512 00:43:05.494555    7916 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:45 OomKillDisable:true NGoroutines:47 SystemTime:2022-05-12 00:43:04.3920243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
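The docker info dumps in this log come from minikube probing the host with a Go template. The same probe can be reproduced directly, and individual fields can be pulled without parsing the full JSON blob (field names taken from the dump above):

	docker system info --format "{{json .}}"
	docker system info --format "{{.OperatingSystem}} {{.ServerVersion}} NCPU={{.NCPU}} Mem={{.MemTotal}}"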
	I0512 00:43:05.496768    7916 out.go:177] * Using the docker driver based on user configuration
	I0512 00:43:05.504009    7916 start.go:284] selected driver: docker
	I0512 00:43:05.504009    7916 start.go:801] validating driver "docker" against <nil>
	I0512 00:43:05.504009    7916 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0512 00:43:05.593259    7916 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 00:43:07.739180    7916 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1456852s)
	I0512 00:43:07.739720    7916 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:45 OomKillDisable:true NGoroutines:47 SystemTime:2022-05-12 00:43:06.6256231 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 00:43:07.739720    7916 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0512 00:43:07.740960    7916 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0512 00:43:07.744167    7916 out.go:177] * Using Docker Desktop driver with the root privilege
	I0512 00:43:07.746063    7916 cni.go:95] Creating CNI manager for ""
	I0512 00:43:07.746101    7916 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 00:43:07.746128    7916 start_flags.go:306] config:
	{Name:skaffold-20220512004259-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:skaffold-20220512004259-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 00:43:07.748480    7916 out.go:177] * Starting control plane node skaffold-20220512004259-7184 in cluster skaffold-20220512004259-7184
	I0512 00:43:07.751794    7916 cache.go:120] Beginning downloading kic base image for docker with docker
	I0512 00:43:07.753809    7916 out.go:177] * Pulling base image ...
	I0512 00:43:07.755907    7916 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 00:43:07.755907    7916 preload.go:148] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0512 00:43:07.755907    7916 cache.go:57] Caching tarball of preloaded images
	I0512 00:43:07.755907    7916 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0512 00:43:07.757450    7916 preload.go:174] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0512 00:43:07.757450    7916 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0512 00:43:07.757939    7916 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\config.json ...
	I0512 00:43:07.757939    7916 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\config.json: {Name:mk9569e43cf936d6ca5f6e168e446f2d712de2b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 00:43:08.835305    7916 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
	I0512 00:43:08.835441    7916 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in daemon, skipping load
	I0512 00:43:08.835441    7916 cache.go:206] Successfully downloaded all kic artifacts
	I0512 00:43:08.835441    7916 start.go:352] acquiring machines lock for skaffold-20220512004259-7184: {Name:mk915fc60c1e249c4ee1c7e6052808fcb2e736a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 00:43:08.835441    7916 start.go:356] acquired machines lock for "skaffold-20220512004259-7184" in 0s
	I0512 00:43:08.835441    7916 start.go:91] Provisioning new machine with config: &{Name:skaffold-20220512004259-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:skaffold-20220512004259-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 00:43:08.835441    7916 start.go:131] createHost starting for "" (driver="docker")
	I0512 00:43:08.841355    7916 out.go:204] * Creating docker container (CPUs=2, Memory=2600MB) ...
	I0512 00:43:08.841914    7916 start.go:165] libmachine.API.Create for "skaffold-20220512004259-7184" (driver="docker")
	I0512 00:43:08.841914    7916 client.go:168] LocalClient.Create starting
	I0512 00:43:08.842481    7916 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0512 00:43:08.842561    7916 main.go:134] libmachine: Decoding PEM data...
	I0512 00:43:08.842561    7916 main.go:134] libmachine: Parsing certificate...
	I0512 00:43:08.842561    7916 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0512 00:43:08.842561    7916 main.go:134] libmachine: Decoding PEM data...
	I0512 00:43:08.842561    7916 main.go:134] libmachine: Parsing certificate...
	I0512 00:43:08.851803    7916 cli_runner.go:164] Run: docker network inspect skaffold-20220512004259-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0512 00:43:09.933901    7916 cli_runner.go:211] docker network inspect skaffold-20220512004259-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 00:43:09.934468    7916 cli_runner.go:217] Completed: docker network inspect skaffold-20220512004259-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0820426s)
	I0512 00:43:09.943452    7916 network_create.go:272] running [docker network inspect skaffold-20220512004259-7184] to gather additional debugging logs...
	I0512 00:43:09.943452    7916 cli_runner.go:164] Run: docker network inspect skaffold-20220512004259-7184
	W0512 00:43:11.025550    7916 cli_runner.go:211] docker network inspect skaffold-20220512004259-7184 returned with exit code 1
	I0512 00:43:11.025580    7916 cli_runner.go:217] Completed: docker network inspect skaffold-20220512004259-7184: (1.0819135s)
	I0512 00:43:11.025620    7916 network_create.go:275] error running [docker network inspect skaffold-20220512004259-7184]: docker network inspect skaffold-20220512004259-7184: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: skaffold-20220512004259-7184
	I0512 00:43:11.025620    7916 network_create.go:277] output of [docker network inspect skaffold-20220512004259-7184]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: skaffold-20220512004259-7184
	
	** /stderr **
	I0512 00:43:11.034561    7916 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 00:43:12.103513    7916 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0688973s)
	I0512 00:43:12.126309    7916 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00060a358] misses:0}
	I0512 00:43:12.126309    7916 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 00:43:12.126309    7916 network_create.go:115] attempt to create docker network skaffold-20220512004259-7184 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0512 00:43:12.133305    7916 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true skaffold-20220512004259-7184
	I0512 00:43:13.352733    7916 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true skaffold-20220512004259-7184: (1.2193663s)
	I0512 00:43:13.352733    7916 network_create.go:99] docker network skaffold-20220512004259-7184 192.168.49.0/24 created
	I0512 00:43:13.352733    7916 kic.go:106] calculated static IP "192.168.49.2" for the "skaffold-20220512004259-7184" container
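Network creation above is two verbatim commands: an inspect probe (which fails with "No such network" on first run, hence the W-level lines) and the create itself. Re-wrapped from the log for readability, with the profile name as a placeholder:

	docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true <profile>
	docker network inspect <profile> --format "{{range .IPAM.Config}}{{.Subnet}} gw={{.Gateway}}{{end}}"

The static node IP 192.168.49.2 is simply the first client address (ClientMin) in that /24.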
	I0512 00:43:13.368835    7916 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0512 00:43:14.478479    7916 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1095872s)
	I0512 00:43:14.486760    7916 cli_runner.go:164] Run: docker volume create skaffold-20220512004259-7184 --label name.minikube.sigs.k8s.io=skaffold-20220512004259-7184 --label created_by.minikube.sigs.k8s.io=true
	I0512 00:43:15.582295    7916 cli_runner.go:217] Completed: docker volume create skaffold-20220512004259-7184 --label name.minikube.sigs.k8s.io=skaffold-20220512004259-7184 --label created_by.minikube.sigs.k8s.io=true: (1.0954789s)
	I0512 00:43:15.582606    7916 oci.go:103] Successfully created a docker volume skaffold-20220512004259-7184
	I0512 00:43:15.591790    7916 cli_runner.go:164] Run: docker run --rm --name skaffold-20220512004259-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-20220512004259-7184 --entrypoint /usr/bin/test -v skaffold-20220512004259-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib
	I0512 00:43:18.133105    7916 cli_runner.go:217] Completed: docker run --rm --name skaffold-20220512004259-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-20220512004259-7184 --entrypoint /usr/bin/test -v skaffold-20220512004259-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib: (2.5411849s)
	I0512 00:43:18.133105    7916 oci.go:107] Successfully prepared a docker volume skaffold-20220512004259-7184
	I0512 00:43:18.133105    7916 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 00:43:18.133105    7916 kic.go:179] Starting extracting preloaded images to volume ...
	I0512 00:43:18.143542    7916 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v skaffold-20220512004259-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir
	I0512 00:43:51.316255    7916 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v skaffold-20220512004259-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir: (33.1708993s)
	I0512 00:43:51.316255    7916 kic.go:188] duration metric: took 33.181451 seconds to extract preloaded images to volume
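The 33-second step just completed is the preload tarball being unpacked into the profile's Docker volume. The pattern, copied from the Run line above with paths shortened to placeholders, is a throwaway container whose entrypoint is tar:

	docker run --rm --entrypoint /usr/bin/tar -v <preloaded-images.tar.lz4>:/preloaded.tar:ro -v <profile-volume>:/extractDir <kicbase-image> -I lz4 -xf /preloaded.tar -C /extractDir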
	I0512 00:43:51.324328    7916 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 00:43:53.402949    7916 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0784097s)
	I0512 00:43:53.403215    7916 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:47 OomKillDisable:true NGoroutines:47 SystemTime:2022-05-12 00:43:52.376113 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 00:43:53.411546    7916 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0512 00:43:55.462525    7916 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.0508204s)
	I0512 00:43:55.470842    7916 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname skaffold-20220512004259-7184 --name skaffold-20220512004259-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-20220512004259-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=skaffold-20220512004259-7184 --network skaffold-20220512004259-7184 --ip 192.168.49.2 --volume skaffold-20220512004259-7184:/var --security-opt apparmor=unconfined --memory=2600mb --memory-swap=2600mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
	I0512 00:43:57.554811    7916 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname skaffold-20220512004259-7184 --name skaffold-20220512004259-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-20220512004259-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=skaffold-20220512004259-7184 --network skaffold-20220512004259-7184 --ip 192.168.49.2 --volume skaffold-20220512004259-7184:/var --security-opt apparmor=unconfined --memory=2600mb --memory-swap=2600mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: (2.0830719s)
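The node itself is the single docker run above; it is re-wrapped here for readability only (flags regrouped, minikube labels and the kicbase digest elided as <kicbase>), since the one-line form is hard to audit:

	docker run -d -t --privileged \
	  --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
	  --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
	  --hostname skaffold-20220512004259-7184 --name skaffold-20220512004259-7184 \
	  --network skaffold-20220512004259-7184 --ip 192.168.49.2 \
	  --volume skaffold-20220512004259-7184:/var \
	  --memory=2600mb --memory-swap=2600mb --cpus=2 -e container=docker \
	  --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
	  --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
	  <kicbase>

Each --publish leaves the host port empty, so Docker picks a free ephemeral port per service; that is why SSH later lands on 127.0.0.1:49216 rather than a fixed port.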
	I0512 00:43:57.562789    7916 cli_runner.go:164] Run: docker container inspect skaffold-20220512004259-7184 --format={{.State.Running}}
	I0512 00:43:58.656541    7916 cli_runner.go:217] Completed: docker container inspect skaffold-20220512004259-7184 --format={{.State.Running}}: (1.0936958s)
	I0512 00:43:58.668400    7916 cli_runner.go:164] Run: docker container inspect skaffold-20220512004259-7184 --format={{.State.Status}}
	I0512 00:43:59.755332    7916 cli_runner.go:217] Completed: docker container inspect skaffold-20220512004259-7184 --format={{.State.Status}}: (1.086876s)
	I0512 00:43:59.767486    7916 cli_runner.go:164] Run: docker exec skaffold-20220512004259-7184 stat /var/lib/dpkg/alternatives/iptables
	I0512 00:44:01.040137    7916 cli_runner.go:217] Completed: docker exec skaffold-20220512004259-7184 stat /var/lib/dpkg/alternatives/iptables: (1.2724562s)
	I0512 00:44:01.040220    7916 oci.go:247] the created container "skaffold-20220512004259-7184" has a running status.
	I0512 00:44:01.040280    7916 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\skaffold-20220512004259-7184\id_rsa...
	I0512 00:44:01.200068    7916 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\skaffold-20220512004259-7184\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0512 00:44:02.399380    7916 cli_runner.go:164] Run: docker container inspect skaffold-20220512004259-7184 --format={{.State.Status}}
	I0512 00:44:03.477174    7916 cli_runner.go:217] Completed: docker container inspect skaffold-20220512004259-7184 --format={{.State.Status}}: (1.0777384s)
	I0512 00:44:03.496605    7916 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0512 00:44:03.496605    7916 kic_runner.go:114] Args: [docker exec --privileged skaffold-20220512004259-7184 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0512 00:44:04.805754    7916 kic_runner.go:123] Done: [docker exec --privileged skaffold-20220512004259-7184 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.3089708s)
	I0512 00:44:04.810593    7916 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\skaffold-20220512004259-7184\id_rsa...
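Key provisioning above reduces to three steps; a minimal sketch with the same file names (minikube actually pushes the public key over docker exec via its kic_runner; docker cp is used here for brevity):

	ssh-keygen -t rsa -N "" -f id_rsa
	docker cp id_rsa.pub skaffold-20220512004259-7184:/home/docker/.ssh/authorized_keys
	docker exec --privileged skaffold-20220512004259-7184 chown docker:docker /home/docker/.ssh/authorized_keys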
	I0512 00:44:05.324997    7916 cli_runner.go:164] Run: docker container inspect skaffold-20220512004259-7184 --format={{.State.Status}}
	I0512 00:44:06.404117    7916 cli_runner.go:217] Completed: docker container inspect skaffold-20220512004259-7184 --format={{.State.Status}}: (1.0789334s)
	I0512 00:44:06.404160    7916 machine.go:88] provisioning docker machine ...
	I0512 00:44:06.404225    7916 ubuntu.go:169] provisioning hostname "skaffold-20220512004259-7184"
	I0512 00:44:06.413182    7916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220512004259-7184
	I0512 00:44:07.478071    7916 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220512004259-7184: (1.0647231s)
	I0512 00:44:07.483812    7916 main.go:134] libmachine: Using SSH client type: native
	I0512 00:44:07.489137    7916 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49216 <nil> <nil>}
	I0512 00:44:07.489137    7916 main.go:134] libmachine: About to run SSH command:
	sudo hostname skaffold-20220512004259-7184 && echo "skaffold-20220512004259-7184" | sudo tee /etc/hostname
	I0512 00:44:07.654571    7916 main.go:134] libmachine: SSH cmd err, output: <nil>: skaffold-20220512004259-7184
	
	I0512 00:44:07.665267    7916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220512004259-7184
	I0512 00:44:08.732686    7916 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220512004259-7184: (1.0673641s)
	I0512 00:44:08.742241    7916 main.go:134] libmachine: Using SSH client type: native
	I0512 00:44:08.742241    7916 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49216 <nil> <nil>}
	I0512 00:44:08.742241    7916 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sskaffold-20220512004259-7184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 skaffold-20220512004259-7184/g' /etc/hosts;
				else 
					echo '127.0.1.1 skaffold-20220512004259-7184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0512 00:44:08.893882    7916 main.go:134] libmachine: SSH cmd err, output: <nil>: 
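All of the provisioning commands from here on travel over SSH to the ephemeral port published for 22/tcp (49216 in this run), as user docker with the key created above. An equivalent manual check that the /etc/hosts edit landed:

	ssh -i <id_rsa path above> -p 49216 docker@127.0.0.1 "grep skaffold-20220512004259-7184 /etc/hosts"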
	I0512 00:44:08.893937    7916 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0512 00:44:08.893983    7916 ubuntu.go:177] setting up certificates
	I0512 00:44:08.894008    7916 provision.go:83] configureAuth start
	I0512 00:44:08.901549    7916 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-20220512004259-7184
	I0512 00:44:09.965031    7916 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-20220512004259-7184: (1.0632756s)
	I0512 00:44:09.965217    7916 provision.go:138] copyHostCerts
	I0512 00:44:09.965581    7916 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0512 00:44:09.965581    7916 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0512 00:44:09.966044    7916 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0512 00:44:09.967321    7916 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0512 00:44:09.967358    7916 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0512 00:44:09.967582    7916 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0512 00:44:09.968281    7916 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0512 00:44:09.968281    7916 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0512 00:44:09.968281    7916 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0512 00:44:09.969301    7916 provision.go:112] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.skaffold-20220512004259-7184 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube skaffold-20220512004259-7184]
	I0512 00:44:10.329317    7916 provision.go:172] copyRemoteCerts
	I0512 00:44:10.339917    7916 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0512 00:44:10.349315    7916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220512004259-7184
	I0512 00:44:11.443650    7916 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220512004259-7184: (1.0942794s)
	I0512 00:44:11.443650    7916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49216 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\skaffold-20220512004259-7184\id_rsa Username:docker}
	I0512 00:44:11.604634    7916 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.2646525s)
	I0512 00:44:11.606093    7916 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0512 00:44:11.663955    7916 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1257 bytes)
	I0512 00:44:11.716351    7916 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0512 00:44:11.769163    7916 provision.go:86] duration metric: configureAuth took 2.8750073s
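configureAuth generated a server certificate whose SANs are listed at provision.go:112 above (node IP, localhost, minikube, the profile name). Assuming a stock openssl is available on the host, the result can be checked against that list:

	openssl x509 -in <server.pem path above> -noout -text | grep -A1 "Subject Alternative Name"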
	I0512 00:44:11.769163    7916 ubuntu.go:193] setting minikube options for container-runtime
	I0512 00:44:11.769832    7916 config.go:178] Loaded profile config "skaffold-20220512004259-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 00:44:11.778507    7916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220512004259-7184
	I0512 00:44:12.871767    7916 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220512004259-7184: (1.0928383s)
	I0512 00:44:12.875604    7916 main.go:134] libmachine: Using SSH client type: native
	I0512 00:44:12.876266    7916 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49216 <nil> <nil>}
	I0512 00:44:12.876266    7916 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0512 00:44:13.044339    7916 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0512 00:44:13.044339    7916 ubuntu.go:71] root file system type: overlay
	I0512 00:44:13.044339    7916 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0512 00:44:13.052696    7916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220512004259-7184
	I0512 00:44:14.089895    7916 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220512004259-7184: (1.0370578s)
	I0512 00:44:14.094056    7916 main.go:134] libmachine: Using SSH client type: native
	I0512 00:44:14.094619    7916 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49216 <nil> <nil>}
	I0512 00:44:14.094743    7916 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0512 00:44:14.319737    7916 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0512 00:44:14.340026    7916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220512004259-7184
	I0512 00:44:15.415666    7916 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220512004259-7184: (1.0754938s)
	I0512 00:44:15.419233    7916 main.go:134] libmachine: Using SSH client type: native
	I0512 00:44:15.419964    7916 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49216 <nil> <nil>}
	I0512 00:44:15.419964    7916 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
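This command is minikube's update-only-if-changed guard: the freshly rendered unit is swapped in, and Docker restarted, only when it differs from what is already installed. A generic sketch of the same pattern (service and file names here are hypothetical):

	sudo diff -u /lib/systemd/system/foo.service /lib/systemd/system/foo.service.new \
	  || { sudo mv /lib/systemd/system/foo.service.new /lib/systemd/system/foo.service; \
	       sudo systemctl daemon-reload && sudo systemctl restart foo; }

diff exits non-zero when the files differ, so the || branch performs the replace and restart; on an unchanged unit the whole command is a no-op.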
	I0512 00:44:16.691768    7916 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-05 13:17:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-12 00:44:14.301734000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
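Because the diff above was non-empty, the replace branch ran: the new unit was moved into place and the two preceding Synchronizing/Executing lines are the output of the subsequent systemctl enable. The installed unit can be confirmed afterwards with the same command minikube itself runs later in this log:

	sudo systemctl cat docker.service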
	
	I0512 00:44:16.691768    7916 machine.go:91] provisioned docker machine in 10.2870818s
	I0512 00:44:16.691768    7916 client.go:171] LocalClient.Create took 1m7.8463799s
	I0512 00:44:16.691768    7916 start.go:173] duration metric: libmachine.API.Create for "skaffold-20220512004259-7184" took 1m7.8463799s
	I0512 00:44:16.691768    7916 start.go:306] post-start starting for "skaffold-20220512004259-7184" (driver="docker")
	I0512 00:44:16.691768    7916 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0512 00:44:16.703381    7916 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0512 00:44:16.711079    7916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220512004259-7184
	I0512 00:44:17.788270    7916 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220512004259-7184: (1.0769899s)
	I0512 00:44:17.788723    7916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49216 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\skaffold-20220512004259-7184\id_rsa Username:docker}
	I0512 00:44:17.938757    7916 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.2353126s)
	I0512 00:44:17.951713    7916 ssh_runner.go:195] Run: cat /etc/os-release
	I0512 00:44:17.965485    7916 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0512 00:44:17.965485    7916 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0512 00:44:17.965485    7916 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0512 00:44:17.965485    7916 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0512 00:44:17.965485    7916 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0512 00:44:17.965485    7916 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0512 00:44:17.965485    7916 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem -> 71842.pem in /etc/ssl/certs
	I0512 00:44:17.979428    7916 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0512 00:44:18.006476    7916 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /etc/ssl/certs/71842.pem (1708 bytes)
	I0512 00:44:18.061594    7916 start.go:309] post-start completed in 1.3697553s
	I0512 00:44:18.072899    7916 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-20220512004259-7184
	I0512 00:44:19.143818    7916 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-20220512004259-7184: (1.0708635s)
	I0512 00:44:19.144098    7916 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\config.json ...
	I0512 00:44:19.160943    7916 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 00:44:19.171901    7916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220512004259-7184
	I0512 00:44:20.223200    7916 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220512004259-7184: (1.0511209s)
	I0512 00:44:20.223712    7916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49216 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\skaffold-20220512004259-7184\id_rsa Username:docker}
	I0512 00:44:20.368758    7916 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.2077527s)
	I0512 00:44:20.379924    7916 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0512 00:44:20.404608    7916 start.go:134] duration metric: createHost completed in 1m11.5655029s
	I0512 00:44:20.404608    7916 start.go:81] releasing machines lock for "skaffold-20220512004259-7184", held for 1m11.5655029s
	I0512 00:44:20.416046    7916 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-20220512004259-7184
	I0512 00:44:21.499983    7916 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-20220512004259-7184: (1.0837569s)
	I0512 00:44:21.502527    7916 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0512 00:44:21.510831    7916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220512004259-7184
	I0512 00:44:21.512816    7916 ssh_runner.go:195] Run: systemctl --version
	I0512 00:44:21.519882    7916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220512004259-7184
	I0512 00:44:22.620448    7916 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220512004259-7184: (1.1095605s)
	I0512 00:44:22.620767    7916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49216 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\skaffold-20220512004259-7184\id_rsa Username:docker}
	I0512 00:44:22.635687    7916 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220512004259-7184: (1.1157481s)
	I0512 00:44:22.635687    7916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49216 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\skaffold-20220512004259-7184\id_rsa Username:docker}
	I0512 00:44:22.845044    7916 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.3424483s)
	I0512 00:44:22.845044    7916 ssh_runner.go:235] Completed: systemctl --version: (1.3321598s)
	I0512 00:44:22.856741    7916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0512 00:44:22.901130    7916 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 00:44:22.937652    7916 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0512 00:44:22.949754    7916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0512 00:44:22.981332    7916 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0512 00:44:23.036050    7916 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0512 00:44:23.215751    7916 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0512 00:44:23.375856    7916 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 00:44:23.417686    7916 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0512 00:44:23.586993    7916 ssh_runner.go:195] Run: sudo systemctl start docker
	I0512 00:44:23.622931    7916 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 00:44:23.721929    7916 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 00:44:23.807297    7916 out.go:204] * Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	I0512 00:44:23.816757    7916 cli_runner.go:164] Run: docker exec -t skaffold-20220512004259-7184 dig +short host.docker.internal
	I0512 00:44:25.078485    7916 cli_runner.go:217] Completed: docker exec -t skaffold-20220512004259-7184 dig +short host.docker.internal: (1.2615623s)
	I0512 00:44:25.078485    7916 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0512 00:44:25.088954    7916 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0512 00:44:25.102959    7916 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
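The one-liner above is minikube's strip-then-append idiom for /etc/hosts: any stale host.minikube.internal record is filtered out, the fresh record is appended, and the result is copied back with sudo (the temp file is needed because the shell redirection itself runs unprivileged). The same pattern with a hypothetical name and address:

	{ grep -v $'\texample.internal$' /etc/hosts; printf '%s\t%s\n' 10.0.0.5 example.internal; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts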
	I0512 00:44:25.138986    7916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" skaffold-20220512004259-7184
	I0512 00:44:26.201128    7916 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" skaffold-20220512004259-7184: (1.0620339s)
	I0512 00:44:26.201855    7916 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 00:44:26.213336    7916 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 00:44:26.282284    7916 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 00:44:26.282284    7916 docker.go:541] Images already preloaded, skipping extraction
	I0512 00:44:26.292155    7916 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 00:44:26.372856    7916 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 00:44:26.372856    7916 cache_images.go:84] Images are preloaded, skipping loading
	I0512 00:44:26.380974    7916 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0512 00:44:26.548147    7916 cni.go:95] Creating CNI manager for ""
	I0512 00:44:26.548147    7916 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 00:44:26.548147    7916 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0512 00:44:26.548147    7916 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:skaffold-20220512004259-7184 NodeName:skaffold-20220512004259-7184 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0512 00:44:26.548147    7916 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "skaffold-20220512004259-7184"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
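	This is the full kubeadm configuration minikube renders before running kubeadm init below; it is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines down. Such a file can be sanity-checked without touching the node, e.g. with kubeadm's dry-run mode (a sketch, assuming a matching kubeadm v1.23.x binary on the PATH):

	kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run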
	
	I0512 00:44:26.548147    7916 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=skaffold-20220512004259-7184 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:skaffold-20220512004259-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0512 00:44:26.559168    7916 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0512 00:44:26.589463    7916 binaries.go:44] Found k8s binaries, skipping transfer
	I0512 00:44:26.602206    7916 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0512 00:44:26.626300    7916 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0512 00:44:26.657885    7916 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0512 00:44:26.699499    7916 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2050 bytes)
	I0512 00:44:26.743643    7916 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0512 00:44:26.755951    7916 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0512 00:44:26.782067    7916 certs.go:54] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184 for IP: 192.168.49.2
	I0512 00:44:26.782723    7916 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I0512 00:44:26.782723    7916 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I0512 00:44:26.783445    7916 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\client.key
	I0512 00:44:26.783612    7916 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\client.crt with IP's: []
	I0512 00:44:27.551551    7916 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\client.crt ...
	I0512 00:44:27.551551    7916 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\client.crt: {Name:mke643db7bc893733e3634c07fa77937b8fb52b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 00:44:27.552524    7916 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\client.key ...
	I0512 00:44:27.552524    7916 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\client.key: {Name:mk5b7f316852517d10ee8c7639e098ee6eb5eaaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 00:44:27.553511    7916 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\apiserver.key.dd3b5fb2
	I0512 00:44:27.553511    7916 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0512 00:44:28.107552    7916 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\apiserver.crt.dd3b5fb2 ...
	I0512 00:44:28.107552    7916 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\apiserver.crt.dd3b5fb2: {Name:mk24c6b37ab06e63d42dc991f5d23311bf791b72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 00:44:28.108557    7916 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\apiserver.key.dd3b5fb2 ...
	I0512 00:44:28.108557    7916 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\apiserver.key.dd3b5fb2: {Name:mk3072a0a5a3e0f15d15aae78932e98fe2efed58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 00:44:28.109547    7916 certs.go:320] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\apiserver.crt.dd3b5fb2 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\apiserver.crt
	I0512 00:44:28.115547    7916 certs.go:324] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\apiserver.key.dd3b5fb2 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\apiserver.key
	I0512 00:44:28.116573    7916 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\proxy-client.key
	I0512 00:44:28.116573    7916 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\proxy-client.crt with IP's: []
	I0512 00:44:28.566227    7916 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\proxy-client.crt ...
	I0512 00:44:28.566227    7916 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\proxy-client.crt: {Name:mk135080787c2303bac4fc92b674aa14ff89af7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 00:44:28.567284    7916 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\proxy-client.key ...
	I0512 00:44:28.567284    7916 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\proxy-client.key: {Name:mk42390a836ddf97ce43c958575a01b6ca9c6b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 00:44:28.575330    7916 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184.pem (1338 bytes)
	W0512 00:44:28.575330    7916 certs.go:384] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184_empty.pem, impossibly tiny 0 bytes
	I0512 00:44:28.575330    7916 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0512 00:44:28.575330    7916 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0512 00:44:28.575330    7916 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0512 00:44:28.575330    7916 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0512 00:44:28.576353    7916 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem (1708 bytes)
	I0512 00:44:28.577354    7916 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0512 00:44:28.636941    7916 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0512 00:44:28.688923    7916 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0512 00:44:28.751935    7916 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\skaffold-20220512004259-7184\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0512 00:44:28.805797    7916 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0512 00:44:28.861753    7916 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0512 00:44:28.925479    7916 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0512 00:44:28.988058    7916 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0512 00:44:29.041349    7916 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /usr/share/ca-certificates/71842.pem (1708 bytes)
	I0512 00:44:29.093826    7916 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0512 00:44:29.148456    7916 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184.pem --> /usr/share/ca-certificates/7184.pem (1338 bytes)
	I0512 00:44:29.198016    7916 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0512 00:44:29.247490    7916 ssh_runner.go:195] Run: openssl version
	I0512 00:44:29.275974    7916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7184.pem && ln -fs /usr/share/ca-certificates/7184.pem /etc/ssl/certs/7184.pem"
	I0512 00:44:29.314440    7916 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7184.pem
	I0512 00:44:29.328845    7916 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 11 23:10 /usr/share/ca-certificates/7184.pem
	I0512 00:44:29.339341    7916 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7184.pem
	I0512 00:44:29.368877    7916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7184.pem /etc/ssl/certs/51391683.0"
	I0512 00:44:29.404460    7916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71842.pem && ln -fs /usr/share/ca-certificates/71842.pem /etc/ssl/certs/71842.pem"
	I0512 00:44:29.438522    7916 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71842.pem
	I0512 00:44:29.453322    7916 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 11 23:10 /usr/share/ca-certificates/71842.pem
	I0512 00:44:29.463744    7916 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71842.pem
	I0512 00:44:29.493181    7916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71842.pem /etc/ssl/certs/3ec20f2e.0"
	I0512 00:44:29.528740    7916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0512 00:44:29.566286    7916 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0512 00:44:29.580338    7916 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 11 23:00 /usr/share/ca-certificates/minikubeCA.pem
	I0512 00:44:29.591344    7916 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0512 00:44:29.619471    7916 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
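The sequence from openssl version down to here installs each CA into the OpenSSL trust store: the certificate is copied under /usr/share/ca-certificates, its subject hash is computed, and a <hash>.0 symlink is created in /etc/ssl/certs so OpenSSL's hashed-directory lookup finds it. The same two steps by hand, with a hypothetical certificate path:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	sudo ln -fs /usr/share/ca-certificates/example.pem /etc/ssl/certs/${h}.0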
	I0512 00:44:29.644561    7916 kubeadm.go:391] StartCluster: {Name:skaffold-20220512004259-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:skaffold-20220512004259-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 00:44:29.653508    7916 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0512 00:44:29.737116    7916 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0512 00:44:29.774100    7916 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0512 00:44:29.799866    7916 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0512 00:44:29.811842    7916 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0512 00:44:29.840217    7916 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0512 00:44:29.840217    7916 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0512 00:44:50.065100    7916 out.go:204]   - Generating certificates and keys ...
	I0512 00:44:50.073923    7916 out.go:204]   - Booting up control plane ...
	I0512 00:44:50.080060    7916 out.go:204]   - Configuring RBAC rules ...
	I0512 00:44:50.084289    7916 cni.go:95] Creating CNI manager for ""
	I0512 00:44:50.084289    7916 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 00:44:50.084289    7916 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0512 00:44:50.100064    7916 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 00:44:50.100064    7916 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=50a7977b568d2ad3e04003527a57f4502d6177a0 minikube.k8s.io/name=skaffold-20220512004259-7184 minikube.k8s.io/updated_at=2022_05_12T00_44_50_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 00:44:50.160805    7916 ops.go:34] apiserver oom_adj: -16
	I0512 00:44:50.670432    7916 kubeadm.go:1020] duration metric: took 586.033ms to wait for elevateKubeSystemPrivileges.
	I0512 00:44:52.089097    7916 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=50a7977b568d2ad3e04003527a57f4502d6177a0 minikube.k8s.io/name=skaffold-20220512004259-7184 minikube.k8s.io/updated_at=2022_05_12T00_44_50_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (1.9888771s)
	I0512 00:44:52.089125    7916 kubeadm.go:393] StartCluster complete in 22.4434146s
	I0512 00:44:52.089219    7916 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 00:44:52.089333    7916 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0512 00:44:52.090821    7916 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 00:44:52.657808    7916 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "skaffold-20220512004259-7184" rescaled to 1
	I0512 00:44:52.658377    7916 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 00:44:52.660967    7916 out.go:177] * Verifying Kubernetes components...
	I0512 00:44:52.658406    7916 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0512 00:44:52.658406    7916 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0512 00:44:52.658950    7916 config.go:178] Loaded profile config "skaffold-20220512004259-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 00:44:52.664741    7916 addons.go:65] Setting default-storageclass=true in profile "skaffold-20220512004259-7184"
	I0512 00:44:52.664741    7916 addons.go:65] Setting storage-provisioner=true in profile "skaffold-20220512004259-7184"
	I0512 00:44:52.664822    7916 addons.go:153] Setting addon storage-provisioner=true in "skaffold-20220512004259-7184"
	I0512 00:44:52.664822    7916 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "skaffold-20220512004259-7184"
	W0512 00:44:52.664822    7916 addons.go:165] addon storage-provisioner should already be in state true
	I0512 00:44:52.664926    7916 host.go:66] Checking if "skaffold-20220512004259-7184" exists ...
	I0512 00:44:52.682493    7916 cli_runner.go:164] Run: docker container inspect skaffold-20220512004259-7184 --format={{.State.Status}}
	I0512 00:44:52.687471    7916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 00:44:52.695477    7916 cli_runner.go:164] Run: docker container inspect skaffold-20220512004259-7184 --format={{.State.Status}}
	I0512 00:44:52.811401    7916 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
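The sed pipeline above rewrites the coredns ConfigMap in place: it inserts a hosts plugin block immediately before the forward directive, so in-cluster lookups of host.minikube.internal resolve to the host IP discovered earlier (192.168.65.2). The Corefile fragment it produces looks like this (surrounding directives elided):

	hosts {
	   192.168.65.2 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf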
	I0512 00:44:52.820403    7916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" skaffold-20220512004259-7184
	I0512 00:44:53.800794    7916 cli_runner.go:217] Completed: docker container inspect skaffold-20220512004259-7184 --format={{.State.Status}}: (1.1051684s)
	I0512 00:44:53.803308    7916 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0512 00:44:53.805665    7916 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 00:44:53.805665    7916 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0512 00:44:53.813612    7916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220512004259-7184
	I0512 00:44:53.816740    7916 cli_runner.go:217] Completed: docker container inspect skaffold-20220512004259-7184 --format={{.State.Status}}: (1.1341883s)
	I0512 00:44:53.826611    7916 addons.go:153] Setting addon default-storageclass=true in "skaffold-20220512004259-7184"
	W0512 00:44:53.826611    7916 addons.go:165] addon default-storageclass should already be in state true
	I0512 00:44:53.826611    7916 host.go:66] Checking if "skaffold-20220512004259-7184" exists ...
	I0512 00:44:53.840612    7916 cli_runner.go:164] Run: docker container inspect skaffold-20220512004259-7184 --format={{.State.Status}}
	I0512 00:44:53.928610    7916 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" skaffold-20220512004259-7184: (1.1081499s)
	I0512 00:44:53.929620    7916 api_server.go:51] waiting for apiserver process to appear ...
	I0512 00:44:53.944030    7916 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 00:44:54.204628    7916 api_server.go:71] duration metric: took 1.5461427s to wait for apiserver process to appear ...
	I0512 00:44:54.204628    7916 api_server.go:87] waiting for apiserver healthz status ...
	I0512 00:44:54.204628    7916 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.3931548s)
	I0512 00:44:54.204628    7916 start.go:815] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0512 00:44:54.204628    7916 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:49215/healthz ...
	I0512 00:44:54.229800    7916 api_server.go:266] https://127.0.0.1:49215/healthz returned 200:
	ok
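	The readiness gate here is a plain HTTPS GET against the apiserver's /healthz endpoint on the host-mapped port; the 200/ok pair above is what a healthy control plane returns. The same probe can be reproduced by hand (certificate verification skipped for brevity; 49215 is the port Docker happened to map for this particular run):

	curl -sk https://127.0.0.1:49215/healthz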
	I0512 00:44:54.234413    7916 api_server.go:140] control plane version: v1.23.5
	I0512 00:44:54.234413    7916 api_server.go:130] duration metric: took 29.7836ms to wait for apiserver health ...
	I0512 00:44:54.234413    7916 system_pods.go:43] waiting for kube-system pods to appear ...
	I0512 00:44:54.252394    7916 system_pods.go:59] 4 kube-system pods found
	I0512 00:44:54.252432    7916 system_pods.go:61] "etcd-skaffold-20220512004259-7184" [5af07d92-26d2-4501-bb1c-ddc64dbf28fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0512 00:44:54.252454    7916 system_pods.go:61] "kube-apiserver-skaffold-20220512004259-7184" [cb4c240e-0c94-479e-b2ce-a5657d5f2d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0512 00:44:54.252468    7916 system_pods.go:61] "kube-controller-manager-skaffold-20220512004259-7184" [a9206641-6f76-4d5b-842a-dd093142440b] Pending
	I0512 00:44:54.252468    7916 system_pods.go:61] "kube-scheduler-skaffold-20220512004259-7184" [38b18062-13b0-4abc-b86a-651ea3c18c7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0512 00:44:54.252468    7916 system_pods.go:74] duration metric: took 18.0539ms to wait for pod list to return data ...
	I0512 00:44:54.252468    7916 kubeadm.go:548] duration metric: took 1.5939802s to wait for : map[apiserver:true system_pods:true] ...
	I0512 00:44:54.252468    7916 node_conditions.go:102] verifying NodePressure condition ...
	I0512 00:44:54.266339    7916 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0512 00:44:54.266339    7916 node_conditions.go:123] node cpu capacity is 16
	I0512 00:44:54.266339    7916 node_conditions.go:105] duration metric: took 13.8711ms to run NodePressure ...
	I0512 00:44:54.266339    7916 start.go:213] waiting for startup goroutines ...
	I0512 00:44:54.894170    7916 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220512004259-7184: (1.0805028s)
	I0512 00:44:54.894170    7916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49216 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\skaffold-20220512004259-7184\id_rsa Username:docker}
	I0512 00:44:54.909142    7916 cli_runner.go:217] Completed: docker container inspect skaffold-20220512004259-7184 --format={{.State.Status}}: (1.0684757s)
	I0512 00:44:54.909142    7916 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0512 00:44:54.909142    7916 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0512 00:44:54.916171    7916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220512004259-7184
	I0512 00:44:55.009250    7916 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 00:44:55.998329    7916 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220512004259-7184: (1.0819793s)
	I0512 00:44:55.998329    7916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49216 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\skaffold-20220512004259-7184\id_rsa Username:docker}
	I0512 00:44:56.162603    7916 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0512 00:44:56.461843    7916 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0512 00:44:56.464903    7916 addons.go:417] enableAddons completed in 3.8063021s
	I0512 00:44:56.691520    7916 start.go:499] kubectl: 1.18.2, cluster: 1.23.5 (minor skew: 5)
	I0512 00:44:56.692982    7916 out.go:177] 
	W0512 00:44:56.696342    7916 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.23.5.
	I0512 00:44:56.698630    7916 out.go:177]   - Want kubectl v1.23.5? Try 'minikube kubectl -- get pods -A'
	I0512 00:44:56.700891    7916 out.go:177] * Done! kubectl is now configured to use "skaffold-20220512004259-7184" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-05-12 00:43:58 UTC, end at Thu 2022-05-12 00:45:27 UTC. --
	May 12 00:44:16 skaffold-20220512004259-7184 dockerd[507]: time="2022-05-12T00:44:16.295528700Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	May 12 00:44:16 skaffold-20220512004259-7184 dockerd[507]: time="2022-05-12T00:44:16.295627000Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	May 12 00:44:16 skaffold-20220512004259-7184 dockerd[507]: time="2022-05-12T00:44:16.295651500Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	May 12 00:44:16 skaffold-20220512004259-7184 dockerd[507]: time="2022-05-12T00:44:16.295662600Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	May 12 00:44:16 skaffold-20220512004259-7184 dockerd[507]: time="2022-05-12T00:44:16.319073400Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	May 12 00:44:16 skaffold-20220512004259-7184 dockerd[507]: time="2022-05-12T00:44:16.337368100Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	May 12 00:44:16 skaffold-20220512004259-7184 dockerd[507]: time="2022-05-12T00:44:16.337477000Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	May 12 00:44:16 skaffold-20220512004259-7184 dockerd[507]: time="2022-05-12T00:44:16.337492800Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	May 12 00:44:16 skaffold-20220512004259-7184 dockerd[507]: time="2022-05-12T00:44:16.337500500Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	May 12 00:44:16 skaffold-20220512004259-7184 dockerd[507]: time="2022-05-12T00:44:16.337507900Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	May 12 00:44:16 skaffold-20220512004259-7184 dockerd[507]: time="2022-05-12T00:44:16.337515100Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	May 12 00:44:16 skaffold-20220512004259-7184 dockerd[507]: time="2022-05-12T00:44:16.337841300Z" level=info msg="Loading containers: start."
	May 12 00:44:16 skaffold-20220512004259-7184 dockerd[507]: time="2022-05-12T00:44:16.516365400Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 12 00:44:16 skaffold-20220512004259-7184 dockerd[507]: time="2022-05-12T00:44:16.593510000Z" level=info msg="Loading containers: done."
	May 12 00:44:16 skaffold-20220512004259-7184 dockerd[507]: time="2022-05-12T00:44:16.631222800Z" level=info msg="Docker daemon" commit=4433bf6 graphdriver(s)=overlay2 version=20.10.15
	May 12 00:44:16 skaffold-20220512004259-7184 dockerd[507]: time="2022-05-12T00:44:16.631408500Z" level=info msg="Daemon has completed initialization"
	May 12 00:44:16 skaffold-20220512004259-7184 systemd[1]: Started Docker Application Container Engine.
	May 12 00:44:16 skaffold-20220512004259-7184 dockerd[507]: time="2022-05-12T00:44:16.685112200Z" level=info msg="API listen on [::]:2376"
	May 12 00:44:16 skaffold-20220512004259-7184 dockerd[507]: time="2022-05-12T00:44:16.703312700Z" level=info msg="API listen on /var/run/docker.sock"
	May 12 00:45:10 skaffold-20220512004259-7184 dockerd[507]: time="2022-05-12T00:45:10.599473400Z" level=info msg="parsed scheme: \"\"" module=grpc
	May 12 00:45:10 skaffold-20220512004259-7184 dockerd[507]: time="2022-05-12T00:45:10.599856700Z" level=info msg="scheme \"\" not registered, fallback to default scheme" module=grpc
	May 12 00:45:10 skaffold-20220512004259-7184 dockerd[507]: time="2022-05-12T00:45:10.600100900Z" level=info msg="ccResolverWrapper: sending update to cc: {[{localhost  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	May 12 00:45:10 skaffold-20220512004259-7184 dockerd[507]: time="2022-05-12T00:45:10.600183400Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	May 12 00:45:12 skaffold-20220512004259-7184 dockerd[507]: time="2022-05-12T00:45:12.811774000Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {localhost  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing only one connection allowed\". Reconnecting..." module=grpc
	May 12 00:45:25 skaffold-20220512004259-7184 dockerd[507]: time="2022-05-12T00:45:25.842269200Z" level=info msg="ignoring event" container=3043d951b3c4103fe3c035ac98ded2fb89242bd7c48f6b2021a73d0f5a5d73d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	2da3fb3ac2ab3       6e38f40d628db       1 second ago        Running             storage-provisioner       1                   933ec2ffa07e5
	704018d8a990c       a4ca41631cc7a       23 seconds ago      Running             coredns                   0                   4321b7d9cb4c1
	3043d951b3c41       6e38f40d628db       24 seconds ago      Exited              storage-provisioner       0                   933ec2ffa07e5
	fe74031095887       3c53fa8541f95       24 seconds ago      Running             kube-proxy                0                   a1923fad43972
	14578b32dcdcf       3fc1d62d65872       49 seconds ago      Running             kube-apiserver            0                   62324f4f3ef5e
	4a52fd1aace95       25f8c7f3da61c       49 seconds ago      Running             etcd                      0                   a53dcedcadb2f
	3225fd046c672       b0c9e5e4dbb14       49 seconds ago      Running             kube-controller-manager   0                   354ea5e06c93a
	cf7325e028e7e       884d49d6d8c9f       49 seconds ago      Running             kube-scheduler            0                   0b569db6b98d5
	
	* 
	* ==> coredns [704018d8a990] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
	* 
	* ==> describe nodes <==
	* Name:               skaffold-20220512004259-7184
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=skaffold-20220512004259-7184
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50a7977b568d2ad3e04003527a57f4502d6177a0
	                    minikube.k8s.io/name=skaffold-20220512004259-7184
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_12T00_44_50_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 May 2022 00:44:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  skaffold-20220512004259-7184
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 May 2022 00:45:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 May 2022 00:45:01 +0000   Thu, 12 May 2022 00:44:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 May 2022 00:45:01 +0000   Thu, 12 May 2022 00:44:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 May 2022 00:45:01 +0000   Thu, 12 May 2022 00:44:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 May 2022 00:45:01 +0000   Thu, 12 May 2022 00:45:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    skaffold-20220512004259-7184
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	System Info:
	  Machine ID:                 8556a0a9a0e64ba4b825f672d2dce0b9
	  System UUID:                8556a0a9a0e64ba4b825f672d2dce0b9
	  Boot ID:                    10186544-b659-4889-afdb-c2512535b797
	  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.15
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-qkxxq                                 100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-skaffold-20220512004259-7184                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         36s
	  kube-system                 kube-apiserver-skaffold-20220512004259-7184             250m (1%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-skaffold-20220512004259-7184    200m (1%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-8mzgr                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-skaffold-20220512004259-7184             100m (0%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 22s                kube-proxy  
	  Normal  NodeHasSufficientMemory  51s (x5 over 51s)  kubelet     Node skaffold-20220512004259-7184 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s (x5 over 51s)  kubelet     Node skaffold-20220512004259-7184 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s (x4 over 51s)  kubelet     Node skaffold-20220512004259-7184 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  51s                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 37s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s                kubelet     Node skaffold-20220512004259-7184 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s                kubelet     Node skaffold-20220512004259-7184 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s                kubelet     Node skaffold-20220512004259-7184 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  36s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                26s                kubelet     Node skaffold-20220512004259-7184 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [May12 00:19] WSL2: Performing memory compaction.
	[May12 00:20] WSL2: Performing memory compaction.
	[May12 00:21] WSL2: Performing memory compaction.
	[May12 00:22] WSL2: Performing memory compaction.
	[May12 00:23] WSL2: Performing memory compaction.
	[May12 00:24] WSL2: Performing memory compaction.
	[May12 00:25] WSL2: Performing memory compaction.
	[May12 00:26] WSL2: Performing memory compaction.
	[May12 00:27] WSL2: Performing memory compaction.
	[May12 00:28] WSL2: Performing memory compaction.
	[May12 00:29] WSL2: Performing memory compaction.
	[May12 00:30] WSL2: Performing memory compaction.
	[May12 00:32] WSL2: Performing memory compaction.
	[May12 00:33] WSL2: Performing memory compaction.
	[May12 00:34] WSL2: Performing memory compaction.
	[May12 00:35] WSL2: Performing memory compaction.
	[May12 00:36] WSL2: Performing memory compaction.
	[May12 00:37] WSL2: Performing memory compaction.
	[May12 00:38] WSL2: Performing memory compaction.
	[May12 00:39] WSL2: Performing memory compaction.
	[May12 00:41] WSL2: Performing memory compaction.
	[May12 00:42] WSL2: Performing memory compaction.
	[May12 00:43] WSL2: Performing memory compaction.
	[May12 00:44] WSL2: Performing memory compaction.
	[May12 00:45] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [4a52fd1aace9] <==
	* {"level":"warn","ts":"2022-05-12T00:45:01.879Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"113.9617ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:260"}
	{"level":"info","ts":"2022-05-12T00:45:01.879Z","caller":"traceutil/trace.go:171","msg":"trace[1957737352] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:413; }","duration":"114.0293ms","start":"2022-05-12T00:45:01.765Z","end":"2022-05-12T00:45:01.879Z","steps":["trace[1957737352] 'agreement among raft nodes before linearized reading'  (duration: 113.647ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-12T00:45:02.051Z","caller":"traceutil/trace.go:171","msg":"trace[1071325988] linearizableReadLoop","detail":"{readStateIndex:422; appliedIndex:422; }","duration":"172.4932ms","start":"2022-05-12T00:45:01.879Z","end":"2022-05-12T00:45:02.051Z","steps":["trace[1071325988] 'read index received'  (duration: 172.4809ms)","trace[1071325988] 'applied index is now lower than readState.Index'  (duration: 9µs)"],"step_count":2}
	{"level":"warn","ts":"2022-05-12T00:45:02.064Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"185.3268ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-public/default\" ","response":"range_response_count:1 size:181"}
	{"level":"info","ts":"2022-05-12T00:45:02.064Z","caller":"traceutil/trace.go:171","msg":"trace[335223217] range","detail":"{range_begin:/registry/serviceaccounts/kube-public/default; range_end:; response_count:1; response_revision:413; }","duration":"185.4677ms","start":"2022-05-12T00:45:01.879Z","end":"2022-05-12T00:45:02.064Z","steps":["trace[335223217] 'agreement among raft nodes before linearized reading'  (duration: 172.7962ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-12T00:45:02.065Z","caller":"traceutil/trace.go:171","msg":"trace[2130459207] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"184.8312ms","start":"2022-05-12T00:45:01.880Z","end":"2022-05-12T00:45:02.065Z","steps":["trace[2130459207] 'process raft request'  (duration: 171.567ms)","trace[2130459207] 'compare'  (duration: 12.3505ms)"],"step_count":2}
	{"level":"info","ts":"2022-05-12T00:45:02.065Z","caller":"traceutil/trace.go:171","msg":"trace[790228756] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"182.9409ms","start":"2022-05-12T00:45:01.882Z","end":"2022-05-12T00:45:02.065Z","steps":["trace[790228756] 'process raft request'  (duration: 182.7762ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-12T00:45:02.065Z","caller":"traceutil/trace.go:171","msg":"trace[744576523] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"182.9225ms","start":"2022-05-12T00:45:01.882Z","end":"2022-05-12T00:45:02.065Z","steps":["trace[744576523] 'process raft request'  (duration: 182.2865ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-12T00:45:02.065Z","caller":"traceutil/trace.go:171","msg":"trace[1491748253] transaction","detail":"{read_only:false; response_revision:416; number_of_response:1; }","duration":"182.8849ms","start":"2022-05-12T00:45:01.882Z","end":"2022-05-12T00:45:02.065Z","steps":["trace[1491748253] 'process raft request'  (duration: 182.6521ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-12T00:45:02.065Z","caller":"traceutil/trace.go:171","msg":"trace[401713686] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"109.7324ms","start":"2022-05-12T00:45:01.955Z","end":"2022-05-12T00:45:02.065Z","steps":["trace[401713686] 'process raft request'  (duration: 109.6169ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-12T00:45:02.065Z","caller":"traceutil/trace.go:171","msg":"trace[893991146] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"181.536ms","start":"2022-05-12T00:45:01.884Z","end":"2022-05-12T00:45:02.065Z","steps":["trace[893991146] 'process raft request'  (duration: 181.1248ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-12T00:45:02.065Z","caller":"traceutil/trace.go:171","msg":"trace[2021967861] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"182.7961ms","start":"2022-05-12T00:45:01.883Z","end":"2022-05-12T00:45:02.065Z","steps":["trace[2021967861] 'process raft request'  (duration: 182.3212ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-12T00:45:02.065Z","caller":"traceutil/trace.go:171","msg":"trace[734632205] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"110.1941ms","start":"2022-05-12T00:45:01.955Z","end":"2022-05-12T00:45:02.065Z","steps":["trace[734632205] 'process raft request'  (duration: 109.9091ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-12T00:45:02.067Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"185.4302ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-05-12T00:45:02.067Z","caller":"traceutil/trace.go:171","msg":"trace[1397578731] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:0; response_revision:421; }","duration":"185.7138ms","start":"2022-05-12T00:45:01.882Z","end":"2022-05-12T00:45:02.067Z","steps":["trace[1397578731] 'agreement among raft nodes before linearized reading'  (duration: 185.4004ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-12T00:45:02.067Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"112.1467ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3652"}
	{"level":"warn","ts":"2022-05-12T00:45:02.067Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"183.4207ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:260"}
	{"level":"info","ts":"2022-05-12T00:45:02.068Z","caller":"traceutil/trace.go:171","msg":"trace[1705654357] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:421; }","duration":"184.113ms","start":"2022-05-12T00:45:01.884Z","end":"2022-05-12T00:45:02.068Z","steps":["trace[1705654357] 'agreement among raft nodes before linearized reading'  (duration: 183.3868ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-12T00:45:02.068Z","caller":"traceutil/trace.go:171","msg":"trace[1943318615] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:421; }","duration":"112.5741ms","start":"2022-05-12T00:45:01.955Z","end":"2022-05-12T00:45:02.068Z","steps":["trace[1943318615] 'agreement among raft nodes before linearized reading'  (duration: 112.1035ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-12T00:45:02.067Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"184.7843ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:2797"}
	{"level":"info","ts":"2022-05-12T00:45:02.069Z","caller":"traceutil/trace.go:171","msg":"trace[1631301502] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:421; }","duration":"186.3995ms","start":"2022-05-12T00:45:01.883Z","end":"2022-05-12T00:45:02.069Z","steps":["trace[1631301502] 'agreement among raft nodes before linearized reading'  (duration: 184.7524ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-12T00:45:02.183Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"103.1954ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:1 size:173"}
	{"level":"info","ts":"2022-05-12T00:45:02.183Z","caller":"traceutil/trace.go:171","msg":"trace[1745403854] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:431; }","duration":"103.3446ms","start":"2022-05-12T00:45:02.079Z","end":"2022-05-12T00:45:02.183Z","steps":["trace[1745403854] 'agreement among raft nodes before linearized reading'  (duration: 88.0626ms)","trace[1745403854] 'range keys from in-memory index tree'  (duration: 15.1083ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-12T00:45:02.375Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"100.2172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-8mzgr\" ","response":"range_response_count:1 size:4449"}
	{"level":"info","ts":"2022-05-12T00:45:02.375Z","caller":"traceutil/trace.go:171","msg":"trace[1509226253] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-8mzgr; range_end:; response_count:1; response_revision:444; }","duration":"100.3607ms","start":"2022-05-12T00:45:02.275Z","end":"2022-05-12T00:45:02.375Z","steps":["trace[1509226253] 'agreement among raft nodes before linearized reading'  (duration: 79.2023ms)","trace[1509226253] 'range keys from in-memory index tree'  (duration: 20.9783ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  00:45:27 up  1:53,  0 users,  load average: 0.92, 1.16, 1.18
	Linux skaffold-20220512004259-7184 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [14578b32dcdc] <==
	* I0512 00:44:44.455815       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0512 00:44:44.456727       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0512 00:44:44.554777       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0512 00:44:44.555096       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0512 00:44:44.555216       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0512 00:44:44.663869       1 controller.go:611] quota admission added evaluator for: namespaces
	I0512 00:44:45.354736       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0512 00:44:45.354865       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0512 00:44:45.361498       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0512 00:44:45.367122       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0512 00:44:45.367222       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0512 00:44:47.006991       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0512 00:44:47.100252       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0512 00:44:47.290931       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0512 00:44:47.305443       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0512 00:44:47.307028       1 controller.go:611] quota admission added evaluator for: endpoints
	I0512 00:44:47.314871       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0512 00:44:47.573659       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0512 00:44:49.659799       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0512 00:44:49.682795       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0512 00:44:49.859975       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0512 00:44:51.058569       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0512 00:45:01.658287       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0512 00:45:01.666119       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0512 00:45:04.871295       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [3225fd046c67] <==
	* I0512 00:45:01.258366       1 shared_informer.go:247] Caches are synced for GC 
	I0512 00:45:01.258351       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0512 00:45:01.262076       1 shared_informer.go:247] Caches are synced for TTL 
	I0512 00:45:01.262077       1 shared_informer.go:247] Caches are synced for namespace 
	I0512 00:45:01.262096       1 shared_informer.go:247] Caches are synced for HPA 
	I0512 00:45:01.262699       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0512 00:45:01.263169       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	W0512 00:45:01.263272       1 node_lifecycle_controller.go:1012] Missing timestamp for Node skaffold-20220512004259-7184. Assuming now as a timestamp.
	I0512 00:45:01.263331       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0512 00:45:01.263387       1 event.go:294] "Event occurred" object="skaffold-20220512004259-7184" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node skaffold-20220512004259-7184 event: Registered Node skaffold-20220512004259-7184 in Controller"
	I0512 00:45:01.268128       1 shared_informer.go:247] Caches are synced for disruption 
	I0512 00:45:01.268281       1 disruption.go:371] Sending events to api server.
	I0512 00:45:01.268412       1 range_allocator.go:374] Set node skaffold-20220512004259-7184 PodCIDR to [10.244.0.0/24]
	I0512 00:45:01.354866       1 shared_informer.go:247] Caches are synced for deployment 
	I0512 00:45:01.362592       1 shared_informer.go:247] Caches are synced for resource quota 
	I0512 00:45:01.369087       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0512 00:45:01.454689       1 shared_informer.go:247] Caches are synced for resource quota 
	I0512 00:45:01.454901       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0512 00:45:01.760301       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 1"
	I0512 00:45:01.870022       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0512 00:45:01.878777       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8mzgr"
	I0512 00:45:01.954997       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0512 00:45:01.955120       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0512 00:45:02.155943       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-qkxxq"
	I0512 00:45:06.264143       1 node_lifecycle_controller.go:1190] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [fe7403109588] <==
	* E0512 00:45:04.270476       1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I0512 00:45:04.276837       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0512 00:45:04.360541       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0512 00:45:04.368935       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0512 00:45:04.373319       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0512 00:45:04.379261       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0512 00:45:04.555406       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0512 00:45:04.555628       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0512 00:45:04.555795       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0512 00:45:04.864458       1 server_others.go:206] "Using iptables Proxier"
	I0512 00:45:04.864586       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0512 00:45:04.864601       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0512 00:45:04.864687       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0512 00:45:04.865633       1 server.go:656] "Version info" version="v1.23.5"
	I0512 00:45:04.866634       1 config.go:226] "Starting endpoint slice config controller"
	I0512 00:45:04.866764       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0512 00:45:04.866830       1 config.go:317] "Starting service config controller"
	I0512 00:45:04.866842       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0512 00:45:04.968290       1 shared_informer.go:247] Caches are synced for service config 
	I0512 00:45:04.968390       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [cf7325e028e7] <==
	* W0512 00:44:45.786945       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0512 00:44:45.787054       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0512 00:44:45.824915       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0512 00:44:45.825025       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0512 00:44:45.856424       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0512 00:44:45.856541       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0512 00:44:45.856585       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0512 00:44:45.856608       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0512 00:44:45.862373       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0512 00:44:45.862477       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0512 00:44:46.056732       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0512 00:44:46.056800       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0512 00:44:46.056770       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0512 00:44:46.056848       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0512 00:44:46.057266       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0512 00:44:46.057421       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0512 00:44:46.057331       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0512 00:44:46.057466       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0512 00:44:46.157251       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0512 00:44:46.157406       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0512 00:44:46.199701       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0512 00:44:46.199844       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0512 00:44:46.356597       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0512 00:44:46.356710       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0512 00:44:48.461660       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-05-12 00:43:58 UTC, end at Thu 2022-05-12 00:45:28 UTC. --
	May 12 00:44:51 skaffold-20220512004259-7184 kubelet[1994]: I0512 00:44:51.973792    1994 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04859fe58ecf649d56687dd3b3766882-k8s-certs\") pod \"kube-controller-manager-skaffold-20220512004259-7184\" (UID: \"04859fe58ecf649d56687dd3b3766882\") " pod="kube-system/kube-controller-manager-skaffold-20220512004259-7184"
	May 12 00:44:51 skaffold-20220512004259-7184 kubelet[1994]: I0512 00:44:51.973855    1994 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04859fe58ecf649d56687dd3b3766882-usr-local-share-ca-certificates\") pod \"kube-controller-manager-skaffold-20220512004259-7184\" (UID: \"04859fe58ecf649d56687dd3b3766882\") " pod="kube-system/kube-controller-manager-skaffold-20220512004259-7184"
	May 12 00:44:51 skaffold-20220512004259-7184 kubelet[1994]: I0512 00:44:51.974019    1994 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04859fe58ecf649d56687dd3b3766882-usr-share-ca-certificates\") pod \"kube-controller-manager-skaffold-20220512004259-7184\" (UID: \"04859fe58ecf649d56687dd3b3766882\") " pod="kube-system/kube-controller-manager-skaffold-20220512004259-7184"
	May 12 00:44:51 skaffold-20220512004259-7184 kubelet[1994]: I0512 00:44:51.974173    1994 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/28a0724ada9e8148bdd09195b54ffda1-etcd-data\") pod \"etcd-skaffold-20220512004259-7184\" (UID: \"28a0724ada9e8148bdd09195b54ffda1\") " pod="kube-system/etcd-skaffold-20220512004259-7184"
	May 12 00:44:51 skaffold-20220512004259-7184 kubelet[1994]: I0512 00:44:51.974297    1994 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7428cc88dcbba5705c06096f4216ea6e-ca-certs\") pod \"kube-apiserver-skaffold-20220512004259-7184\" (UID: \"7428cc88dcbba5705c06096f4216ea6e\") " pod="kube-system/kube-apiserver-skaffold-20220512004259-7184"
	May 12 00:44:51 skaffold-20220512004259-7184 kubelet[1994]: I0512 00:44:51.974556    1994 reconciler.go:157] "Reconciler: start to sync state"
	May 12 00:44:51 skaffold-20220512004259-7184 kubelet[1994]: E0512 00:44:51.994739    1994 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"kube-scheduler-skaffold-20220512004259-7184\" already exists" pod="kube-system/kube-scheduler-skaffold-20220512004259-7184"
	May 12 00:45:01 skaffold-20220512004259-7184 kubelet[1994]: I0512 00:45:01.360147    1994 kuberuntime_manager.go:1105] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 12 00:45:01 skaffold-20220512004259-7184 kubelet[1994]: I0512 00:45:01.361740    1994 docker_service.go:364] "Docker cri received runtime config" runtimeConfig="&RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	May 12 00:45:01 skaffold-20220512004259-7184 kubelet[1994]: I0512 00:45:01.362431    1994 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 12 00:45:01 skaffold-20220512004259-7184 kubelet[1994]: I0512 00:45:01.878798    1994 topology_manager.go:200] "Topology Admit Handler"
	May 12 00:45:02 skaffold-20220512004259-7184 kubelet[1994]: I0512 00:45:02.064532    1994 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxpb5\" (UniqueName: \"kubernetes.io/projected/487ef925-b9c5-4791-b243-829b7d59ce3f-kube-api-access-zxpb5\") pod \"storage-provisioner\" (UID: \"487ef925-b9c5-4791-b243-829b7d59ce3f\") " pod="kube-system/storage-provisioner"
	May 12 00:45:02 skaffold-20220512004259-7184 kubelet[1994]: I0512 00:45:02.064681    1994 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/487ef925-b9c5-4791-b243-829b7d59ce3f-tmp\") pod \"storage-provisioner\" (UID: \"487ef925-b9c5-4791-b243-829b7d59ce3f\") " pod="kube-system/storage-provisioner"
	May 12 00:45:02 skaffold-20220512004259-7184 kubelet[1994]: I0512 00:45:02.067383    1994 topology_manager.go:200] "Topology Admit Handler"
	May 12 00:45:02 skaffold-20220512004259-7184 kubelet[1994]: I0512 00:45:02.165218    1994 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4c1e39e3-f12b-4efe-8dab-42590203feb2-kube-proxy\") pod \"kube-proxy-8mzgr\" (UID: \"4c1e39e3-f12b-4efe-8dab-42590203feb2\") " pod="kube-system/kube-proxy-8mzgr"
	May 12 00:45:02 skaffold-20220512004259-7184 kubelet[1994]: I0512 00:45:02.165357    1994 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnfg7\" (UniqueName: \"kubernetes.io/projected/4c1e39e3-f12b-4efe-8dab-42590203feb2-kube-api-access-rnfg7\") pod \"kube-proxy-8mzgr\" (UID: \"4c1e39e3-f12b-4efe-8dab-42590203feb2\") " pod="kube-system/kube-proxy-8mzgr"
	May 12 00:45:02 skaffold-20220512004259-7184 kubelet[1994]: I0512 00:45:02.165410    1994 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c1e39e3-f12b-4efe-8dab-42590203feb2-xtables-lock\") pod \"kube-proxy-8mzgr\" (UID: \"4c1e39e3-f12b-4efe-8dab-42590203feb2\") " pod="kube-system/kube-proxy-8mzgr"
	May 12 00:45:02 skaffold-20220512004259-7184 kubelet[1994]: I0512 00:45:02.165477    1994 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c1e39e3-f12b-4efe-8dab-42590203feb2-lib-modules\") pod \"kube-proxy-8mzgr\" (UID: \"4c1e39e3-f12b-4efe-8dab-42590203feb2\") " pod="kube-system/kube-proxy-8mzgr"
	May 12 00:45:02 skaffold-20220512004259-7184 kubelet[1994]: I0512 00:45:02.188017    1994 topology_manager.go:200] "Topology Admit Handler"
	May 12 00:45:02 skaffold-20220512004259-7184 kubelet[1994]: I0512 00:45:02.366375    1994 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpdzf\" (UniqueName: \"kubernetes.io/projected/5a38a5d8-ea06-4b10-9f9d-202279d54499-kube-api-access-hpdzf\") pod \"coredns-64897985d-qkxxq\" (UID: \"5a38a5d8-ea06-4b10-9f9d-202279d54499\") " pod="kube-system/coredns-64897985d-qkxxq"
	May 12 00:45:02 skaffold-20220512004259-7184 kubelet[1994]: I0512 00:45:02.366531    1994 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a38a5d8-ea06-4b10-9f9d-202279d54499-config-volume\") pod \"coredns-64897985d-qkxxq\" (UID: \"5a38a5d8-ea06-4b10-9f9d-202279d54499\") " pod="kube-system/coredns-64897985d-qkxxq"
	May 12 00:45:04 skaffold-20220512004259-7184 kubelet[1994]: I0512 00:45:04.657435    1994 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="4321b7d9cb4c129eedb714d144e3c852fc7503ed0cff533421888436ca0b72a6"
	May 12 00:45:04 skaffold-20220512004259-7184 kubelet[1994]: I0512 00:45:04.658723    1994 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-qkxxq through plugin: invalid network status for"
	May 12 00:45:05 skaffold-20220512004259-7184 kubelet[1994]: I0512 00:45:05.809469    1994 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-qkxxq through plugin: invalid network status for"
	May 12 00:45:26 skaffold-20220512004259-7184 kubelet[1994]: I0512 00:45:26.059150    1994 scope.go:110] "RemoveContainer" containerID="3043d951b3c4103fe3c035ac98ded2fb89242bd7c48f6b2021a73d0f5a5d73d8"
	
	* 
	* ==> storage-provisioner [2da3fb3ac2ab] <==
	* I0512 00:45:26.576546       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0512 00:45:26.669981       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0512 00:45:26.670250       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0512 00:45:26.685871       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0512 00:45:26.686483       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_skaffold-20220512004259-7184_5d38f317-7ad4-4ca5-9fe7-a5c614d38fbb!
	I0512 00:45:26.686853       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a0d032ab-1f4b-421f-91b6-d166dd0d5e71", APIVersion:"v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' skaffold-20220512004259-7184_5d38f317-7ad4-4ca5-9fe7-a5c614d38fbb became leader
	I0512 00:45:26.787117       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_skaffold-20220512004259-7184_5d38f317-7ad4-4ca5-9fe7-a5c614d38fbb!
	
	* 
	* ==> storage-provisioner [3043d951b3c4] <==
	* I0512 00:45:04.658263       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0512 00:45:25.807332       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
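Note on the log above: it shows why two storage-provisioner containers appear in the container status table. The first one (3043d951b3c4) initialized at 00:45:04 but exited fatally at 00:45:25 when its GET /version probe to the in-cluster "kubernetes" service VIP 10.96.0.1:443 was refused; kubelet then removed it (scope.go:110 "RemoveContainer") and started the replacement (2da3fb3ac2ab), which acquired the k8s.io-minikube-hostpath lease and kept running. A minimal Go sketch of that kind of startup probe with retries is below; waitForAPIServer is a hypothetical helper, not the storage-provisioner source, and TLS and service-account authentication are deliberately simplified.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForAPIServer retries the /version probe that the log shows failing
	// fatally at main.go:39, instead of giving up on the first refusal.
	func waitForAPIServer(url string, attempts int, delay time.Duration) error {
		client := &http.Client{
			Timeout: 32 * time.Second, // matches the ?timeout=32s in the log
			// For this sketch only: skip server cert verification; a real
			// in-cluster client would use the cluster CA and a token.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		var lastErr error
		for i := 0; i < attempts; i++ {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				return nil // API server answered; safe to start provisioning
			}
			lastErr = err
			time.Sleep(delay)
		}
		return fmt.Errorf("API server still unreachable after %d attempts: %w", attempts, lastErr)
	}

	func main() {
		// 10.96.0.1:443 is the in-cluster "kubernetes" service VIP from the log.
		if err := waitForAPIServer("https://10.96.0.1:443/version", 5, 2*time.Second); err != nil {
			fmt.Println(err)
		}
	}

With a retry window like this, the brief period before kube-proxy has programmed the service VIP (its caches only synced at 00:45:04) would not be fatal; in the actual run, kubelet's container restart plays that role instead.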
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p skaffold-20220512004259-7184 -n skaffold-20220512004259-7184
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p skaffold-20220512004259-7184 -n skaffold-20220512004259-7184: (6.6133384s)
helpers_test.go:261: (dbg) Run:  kubectl --context skaffold-20220512004259-7184 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestSkaffold]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context skaffold-20220512004259-7184 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context skaffold-20220512004259-7184 describe pod : exit status 1 (230.397ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context skaffold-20220512004259-7184 describe pod : exit status 1
helpers_test.go:175: Cleaning up "skaffold-20220512004259-7184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p skaffold-20220512004259-7184
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p skaffold-20220512004259-7184: (21.331381s)
--- FAIL: TestSkaffold (178.25s)
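Note on the post-mortem describe step above: it fails only because nothing matched. The field-selector query at helpers_test.go:261 returned no non-running pods, so helpers_test.go:275 ran `kubectl describe pod` with an empty name list and kubectl exited 1 with "resource name may not be empty". A minimal Go sketch of guarding that call follows; describeNonRunningPods is a hypothetical helper for illustration, not the actual helpers_test.go code.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// describeNonRunningPods skips "kubectl describe pod" when the preceding
	// jsonpath query returned no pod names, avoiding the
	// "resource name may not be empty" failure seen above.
	func describeNonRunningPods(kubeContext, jsonpathOutput string) error {
		names := strings.Fields(jsonpathOutput)
		if len(names) == 0 {
			return nil // no non-running pods; nothing to describe
		}
		args := append([]string{"--context", kubeContext, "describe", "pod"}, names...)
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		// Empty query output, as in the log above: the helper returns cleanly.
		if err := describeNonRunningPods("skaffold-20220512004259-7184", ""); err != nil {
			fmt.Println(err)
		}
	}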

                                                
                                    
TestRunningBinaryUpgrade (502.1s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.9.0.2203974399.exe start -p running-upgrade-20220512005137-7184 --memory=2200 --vm-driver=docker

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.9.0.2203974399.exe start -p running-upgrade-20220512005137-7184 --memory=2200 --vm-driver=docker: (4m12.8517626s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-20220512005137-7184 --memory=2200 --alsologtostderr -v=1 --driver=docker
E0512 00:56:24.919565    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p running-upgrade-20220512005137-7184 --memory=2200 --alsologtostderr -v=1 --driver=docker: exit status 81 (1m48.8560892s)

                                                
                                                
-- stdout --
	* [running-upgrade-20220512005137-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13639
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.23.5 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.5
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-20220512005137-7184 in cluster running-upgrade-20220512005137-7184
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20220512005137-7184" container ...
	* Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
	  - kubeadm.pod-network-cidr=10.244.0.0/16
	X Problems detected in kubelet:
	  May 12 00:57:06 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:06.310532    8709 pod_workers.go:191] Error syncing pod 112c60df9e36eeaf13a6dd3074765810 ("kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"
	  May 12 00:57:08 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:08.207139    8709 pod_workers.go:191] Error syncing pod 112c60df9e36eeaf13a6dd3074765810 ("kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"
	  May 12 00:57:09 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:09.412197    8709 pod_workers.go:191] Error syncing pod 112c60df9e36eeaf13a6dd3074765810 ("kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0512 00:55:51.474294    8836 out.go:296] Setting OutFile to fd 1732 ...
	I0512 00:55:51.567336    8836 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 00:55:51.567336    8836 out.go:309] Setting ErrFile to fd 1736...
	I0512 00:55:51.567336    8836 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 00:55:51.580389    8836 out.go:303] Setting JSON to false
	I0512 00:55:51.583383    8836 start.go:115] hostinfo: {"hostname":"minikube4","uptime":15404,"bootTime":1652301547,"procs":172,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0512 00:55:51.583383    8836 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0512 00:55:51.588363    8836 out.go:177] * [running-upgrade-20220512005137-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0512 00:55:51.600067    8836 notify.go:193] Checking for updates...
	I0512 00:55:51.604998    8836 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0512 00:55:51.613005    8836 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0512 00:55:51.621014    8836 out.go:177]   - MINIKUBE_LOCATION=13639
	I0512 00:55:51.625019    8836 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0512 00:55:51.633015    8836 config.go:178] Loaded profile config "running-upgrade-20220512005137-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0512 00:55:51.633015    8836 start_flags.go:634] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
	I0512 00:55:51.638006    8836 out.go:177] * Kubernetes 1.23.5 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.5
	I0512 00:55:51.640023    8836 driver.go:358] Setting default libvirt URI to qemu:///system
	I0512 00:55:54.823856    8836 docker.go:137] docker version: linux-20.10.14
	I0512 00:55:54.832706    8836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 00:55:57.656720    8836 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.8238683s)
	I0512 00:55:57.656720    8836 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:88 OomKillDisable:true NGoroutines:66 SystemTime:2022-05-12 00:55:56.279079 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
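
The docker info dump above comes from a single "docker system info --format {{json .}}" call: the engine returns its whole state as one JSON object that the client then decodes. A minimal Go sketch of the same query; the struct below is a hand-picked subset of the fields visible in the log, not minikube's full info type.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// A hand-picked subset of the fields visible in the log line above.
type dockerInfo struct {
	ServerVersion   string `json:"ServerVersion"`
	OperatingSystem string `json:"OperatingSystem"`
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
}

func main() {
	// One CLI call, one JSON object on stdout.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("docker %s on %q: %d CPUs, %d bytes RAM\n",
		info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
}
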
	I0512 00:55:57.679050    8836 out.go:177] * Using the docker driver based on existing profile
	I0512 00:55:57.682048    8836 start.go:284] selected driver: docker
	I0512 00:55:57.682048    8836 start.go:801] validating driver "docker" against &{Name:running-upgrade-20220512005137-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-20220512005137-7184 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 00:55:57.682048    8836 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0512 00:55:57.778558    8836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 00:56:00.128041    8836 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.349362s)
	I0512 00:56:00.128041    8836 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:88 OomKillDisable:true NGoroutines:66 SystemTime:2022-05-12 00:55:58.9348502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 00:56:00.128041    8836 cni.go:95] Creating CNI manager for ""
	I0512 00:56:00.128041    8836 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 00:56:00.128041    8836 start_flags.go:306] config:
	{Name:running-upgrade-20220512005137-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-20220512005137-7184 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 00:56:00.261701    8836 out.go:177] * Starting control plane node running-upgrade-20220512005137-7184 in cluster running-upgrade-20220512005137-7184
	I0512 00:56:00.458452    8836 cache.go:120] Beginning downloading kic base image for docker with docker
	I0512 00:56:00.497914    8836 out.go:177] * Pulling base image ...
	I0512 00:56:00.500752    8836 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime docker
	I0512 00:56:00.500752    8836 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	W0512 00:56:00.540943    8836 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-docker-overlay2-amd64.tar.lz4 status code: 404
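
The 404 above is the preload probe: minikube checks whether a preloaded image tarball exists for the requested Kubernetes version and falls back to caching images one by one when it does not. A plain HEAD request against the same URL reproduces the check (a sketch, not the actual download code):

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Same URL the preloader probed; 404 means "no preload tarball for
	// v1.18.0, cache the images individually instead".
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-docker-overlay2-amd64.tar.lz4"
	resp, err := http.Head(url)
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
	fmt.Println(resp.StatusCode)
}
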
	I0512 00:56:00.541350    8836 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\running-upgrade-20220512005137-7184\config.json ...
	I0512 00:56:00.541459    8836 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver:v1.18.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.18.0
	I0512 00:56:00.541577    8836 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager:v1.18.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.18.0
	I0512 00:56:00.541613    8836 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause:3.2 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.2
	I0512 00:56:00.541987    8836 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler:v1.18.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.18.0
	I0512 00:56:00.541987    8836 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0
	I0512 00:56:00.542850    8836 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy:v1.18.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.18.0
	I0512 00:56:00.541459    8836 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0512 00:56:00.544988    8836 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns:1.6.7 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.7
	I0512 00:56:00.755969    8836 cache.go:107] acquiring lock: {Name:mk8f345a926551a9f97bd69298d56374eda403f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 00:56:00.755969    8836 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 00:56:00.755969    8836 cache.go:107] acquiring lock: {Name:mk8ee1f737de2584324bcf40c1da8c06053008f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 00:56:00.755969    8836 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.18.0 exists
	I0512 00:56:00.755969    8836 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0 exists
	I0512 00:56:00.755969    8836 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0512 00:56:00.755969    8836 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 210.9705ms
	I0512 00:56:00.757019    8836 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0512 00:56:00.755969    8836 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.18.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-proxy_v1.18.0" took 213.108ms
	I0512 00:56:00.757019    8836 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\etcd_3.4.3-0" took 215.0208ms
	I0512 00:56:00.757019    8836 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\etcd_3.4.3-0 succeeded
	I0512 00:56:00.757019    8836 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.18.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-proxy_v1.18.0 succeeded
	I0512 00:56:00.771967    8836 cache.go:107] acquiring lock: {Name:mk7d4216c64925b5e1bb051eb5609dd954acd685 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 00:56:00.771967    8836 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.7 exists
	I0512 00:56:00.771967    8836 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.7" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\coredns_1.6.7" took 226.9668ms
	I0512 00:56:00.771967    8836 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.7 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\coredns_1.6.7 succeeded
	I0512 00:56:00.772961    8836 cache.go:107] acquiring lock: {Name:mk3d1e6fb5723cf495ae37e51a62255563c2c003 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 00:56:00.772961    8836 cache.go:107] acquiring lock: {Name:mk846ce663c82a7059586221b806d90359219f99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 00:56:00.772961    8836 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.2 exists
	I0512 00:56:00.772961    8836 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.18.0 exists
	I0512 00:56:00.773967    8836 cache.go:96] cache image "k8s.gcr.io/pause:3.2" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\pause_3.2" took 232.3421ms
	I0512 00:56:00.773967    8836 cache.go:80] save to tar file k8s.gcr.io/pause:3.2 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\pause_3.2 succeeded
	I0512 00:56:00.773967    8836 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.18.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-controller-manager_v1.18.0" took 232.3421ms
	I0512 00:56:00.773967    8836 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.18.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-controller-manager_v1.18.0 succeeded
	I0512 00:56:00.787603    8836 cache.go:107] acquiring lock: {Name:mkf01f367151153f1ecf01f7130a79a93901537f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 00:56:00.787603    8836 cache.go:107] acquiring lock: {Name:mk4eb753e21b24b9adb3dd22d2f7fd67dd181f42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 00:56:00.787603    8836 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.18.0 exists
	I0512 00:56:00.787603    8836 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.18.0 exists
	I0512 00:56:00.787603    8836 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.18.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-apiserver_v1.18.0" took 246.0132ms
	I0512 00:56:00.787603    8836 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.18.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-apiserver_v1.18.0 succeeded
	I0512 00:56:00.787603    8836 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.18.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\k8s.gcr.io\\kube-scheduler_v1.18.0" took 245.6033ms
	I0512 00:56:00.787603    8836 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.18.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\k8s.gcr.io\kube-scheduler_v1.18.0 succeeded
	I0512 00:56:00.787603    8836 cache.go:87] Successfully saved all images to host disk.
	I0512 00:56:01.797925    8836 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
	I0512 00:56:01.797925    8836 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in daemon, skipping load
	I0512 00:56:01.797925    8836 cache.go:206] Successfully downloaded all kic artifacts
	I0512 00:56:01.798226    8836 start.go:352] acquiring machines lock for running-upgrade-20220512005137-7184: {Name:mke7183e7f7bd143ef649e5fde0cd8c924f456f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 00:56:01.798973    8836 start.go:356] acquired machines lock for "running-upgrade-20220512005137-7184" in 747.1µs
	I0512 00:56:01.799725    8836 start.go:94] Skipping create...Using existing machine configuration
	I0512 00:56:01.799725    8836 fix.go:55] fixHost starting: m01
	I0512 00:56:01.818631    8836 cli_runner.go:164] Run: docker container inspect running-upgrade-20220512005137-7184 --format={{.State.Status}}
	I0512 00:56:03.028693    8836 cli_runner.go:217] Completed: docker container inspect running-upgrade-20220512005137-7184 --format={{.State.Status}}: (1.2097757s)
	I0512 00:56:03.028747    8836 fix.go:103] recreateIfNeeded on running-upgrade-20220512005137-7184: state=Running err=<nil>
	W0512 00:56:03.028747    8836 fix.go:129] unexpected machine state, will restart: <nil>
	I0512 00:56:03.034019    8836 out.go:177] * Updating the running docker "running-upgrade-20220512005137-7184" container ...
	I0512 00:56:03.046017    8836 machine.go:88] provisioning docker machine ...
	I0512 00:56:03.046017    8836 ubuntu.go:169] provisioning hostname "running-upgrade-20220512005137-7184"
	I0512 00:56:03.052013    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220512005137-7184
	I0512 00:56:04.214002    8836 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220512005137-7184: (1.1619292s)
	I0512 00:56:04.220006    8836 main.go:134] libmachine: Using SSH client type: native
	I0512 00:56:04.220006    8836 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49478 <nil> <nil>}
	I0512 00:56:04.220006    8836 main.go:134] libmachine: About to run SSH command:
	sudo hostname running-upgrade-20220512005137-7184 && echo "running-upgrade-20220512005137-7184" | sudo tee /etc/hostname
	I0512 00:56:04.423481    8836 main.go:134] libmachine: SSH cmd err, output: <nil>: running-upgrade-20220512005137-7184
	
	I0512 00:56:04.431483    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220512005137-7184
	I0512 00:56:05.646393    8836 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220512005137-7184: (1.2137691s)
	I0512 00:56:05.649990    8836 main.go:134] libmachine: Using SSH client type: native
	I0512 00:56:05.649990    8836 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49478 <nil> <nil>}
	I0512 00:56:05.649990    8836 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-20220512005137-7184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-20220512005137-7184/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-20220512005137-7184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0512 00:56:05.828503    8836 main.go:134] libmachine: SSH cmd err, output: <nil>: 
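
Both hostname commands above travel over the container's forwarded SSH port (127.0.0.1:49478) through libmachine's native Go SSH client. A minimal sketch of the same round trip with golang.org/x/crypto/ssh; the key path, user "docker", and the InsecureIgnoreHostKey callback are assumptions that loosely match the log, not minikube's exact configuration.

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Assumed values loosely matching the log: the profile's id_rsa key,
	// user "docker", and the forwarded port 49478 on localhost.
	key, err := os.ReadFile("id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Acceptable for a throwaway local test container, never for real hosts.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:49478", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	var out bytes.Buffer
	sess.Stdout = &out
	if err := sess.Run("hostname"); err != nil {
		log.Fatal(err)
	}
	fmt.Print(out.String())
}
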
	I0512 00:56:05.828503    8836 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0512 00:56:05.828503    8836 ubuntu.go:177] setting up certificates
	I0512 00:56:05.828503    8836 provision.go:83] configureAuth start
	I0512 00:56:05.839538    8836 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-20220512005137-7184
	I0512 00:56:07.029786    8836 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-20220512005137-7184: (1.190187s)
	I0512 00:56:07.029786    8836 provision.go:138] copyHostCerts
	I0512 00:56:07.029786    8836 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0512 00:56:07.029786    8836 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0512 00:56:07.029786    8836 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0512 00:56:07.031830    8836 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0512 00:56:07.031830    8836 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0512 00:56:07.031830    8836 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0512 00:56:07.032788    8836 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0512 00:56:07.032788    8836 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0512 00:56:07.033790    8836 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0512 00:56:07.034797    8836 provision.go:112] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.running-upgrade-20220512005137-7184 san=[172.17.0.3 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-20220512005137-7184]
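
configureAuth generates a server certificate whose subject alternative names cover every entry in the san=[...] list above. A compressed sketch with crypto/x509; it self-signs to stay short, whereas minikube signs the server cert with the CA in ca.pem/ca-key.pem.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-20220512005137-7184"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration:26280h0m0s in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The san=[...] list from the log, split into IPs and DNS names.
		IPAddresses: []net.IP{net.ParseIP("172.17.0.3"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "running-upgrade-20220512005137-7184"},
	}
	// Self-signed (template used as its own parent) to keep the sketch short.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
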
	I0512 00:56:07.622203    8836 provision.go:172] copyRemoteCerts
	I0512 00:56:07.638694    8836 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0512 00:56:07.650703    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220512005137-7184
	I0512 00:56:08.929378    8836 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220512005137-7184: (1.2786088s)
	I0512 00:56:08.929378    8836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49478 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\running-upgrade-20220512005137-7184\id_rsa Username:docker}
	I0512 00:56:09.057379    8836 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.4186114s)
	I0512 00:56:09.058384    8836 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0512 00:56:09.115015    8836 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1277 bytes)
	I0512 00:56:09.160024    8836 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0512 00:56:09.207027    8836 provision.go:86] duration metric: configureAuth took 3.3783501s
	I0512 00:56:09.207027    8836 ubuntu.go:193] setting minikube options for container-runtime
	I0512 00:56:09.207027    8836 config.go:178] Loaded profile config "running-upgrade-20220512005137-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0512 00:56:09.222051    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220512005137-7184
	I0512 00:56:10.467245    8836 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220512005137-7184: (1.2451296s)
	I0512 00:56:10.471250    8836 main.go:134] libmachine: Using SSH client type: native
	I0512 00:56:10.471250    8836 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49478 <nil> <nil>}
	I0512 00:56:10.471250    8836 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0512 00:56:10.638677    8836 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0512 00:56:10.638677    8836 ubuntu.go:71] root file system type: overlay
	I0512 00:56:10.639680    8836 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0512 00:56:10.646665    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220512005137-7184
	I0512 00:56:11.900516    8836 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220512005137-7184: (1.2537856s)
	I0512 00:56:11.904177    8836 main.go:134] libmachine: Using SSH client type: native
	I0512 00:56:11.904177    8836 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49478 <nil> <nil>}
	I0512 00:56:11.904177    8836 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0512 00:56:12.139407    8836 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0512 00:56:12.146432    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220512005137-7184
	I0512 00:56:13.337097    8836 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220512005137-7184: (1.1906039s)
	I0512 00:56:13.342990    8836 main.go:134] libmachine: Using SSH client type: native
	I0512 00:56:13.342990    8836 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49478 <nil> <nil>}
	I0512 00:56:13.342990    8836 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0512 00:56:23.844288    8836 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 00:52:21.503783000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-12 00:56:12.118926000 +0000
	@@ -5,9 +5,12 @@
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	+Restart=on-failure
	 
	 
	 
	@@ -23,7 +26,7 @@
	 # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	 ExecStart=
	 ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	-ExecReload=/bin/kill -s HUP 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	
	I0512 00:56:23.844288    8836 machine.go:91] provisioned docker machine in 20.7971977s
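
The unit update above is deliberately idempotent: the candidate unit is written to docker.service.new, diffed against the live unit, and the daemon is restarted only when the diff is non-empty (here it is, since Restart=on-failure and the StartLimit* settings were added). A sketch of the same write, diff, swap pattern, assuming root on a systemd host; renderedUnit stands in for whatever the provisioner produced.

package main

import (
	"log"
	"os"
	"os/exec"
)

// Placeholder for the unit text the provisioner rendered.
var renderedUnit = []byte("[Unit]\nDescription=Docker Application Container Engine\n")

func main() {
	const cur = "/lib/systemd/system/docker.service"
	const next = cur + ".new"
	if err := os.WriteFile(next, renderedUnit, 0o644); err != nil {
		log.Fatal(err)
	}
	// diff exits 0 only when the files are identical.
	if exec.Command("diff", "-u", cur, next).Run() == nil {
		os.Remove(next) // nothing changed; leave the running daemon alone
		return
	}
	if err := os.Rename(next, cur); err != nil {
		log.Fatal(err)
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
		if err := exec.Command("systemctl", args...).Run(); err != nil {
			log.Fatal(err)
		}
	}
}
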
	I0512 00:56:23.844288    8836 start.go:306] post-start starting for "running-upgrade-20220512005137-7184" (driver="docker")
	I0512 00:56:23.844288    8836 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0512 00:56:23.858293    8836 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0512 00:56:23.865273    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220512005137-7184
	I0512 00:56:25.268802    8836 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220512005137-7184: (1.4034565s)
	I0512 00:56:25.268802    8836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49478 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\running-upgrade-20220512005137-7184\id_rsa Username:docker}
	I0512 00:56:25.520921    8836 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.6625423s)
	I0512 00:56:25.539811    8836 ssh_runner.go:195] Run: cat /etc/os-release
	I0512 00:56:25.598846    8836 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0512 00:56:25.598846    8836 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0512 00:56:25.598846    8836 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0512 00:56:25.598846    8836 info.go:137] Remote host: Ubuntu 19.10
	I0512 00:56:25.598846    8836 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0512 00:56:25.598846    8836 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0512 00:56:25.601836    8836 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem -> 71842.pem in /etc/ssl/certs
	I0512 00:56:25.623828    8836 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0512 00:56:25.718732    8836 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /etc/ssl/certs/71842.pem (1708 bytes)
	I0512 00:56:26.009369    8836 start.go:309] post-start completed in 2.1649694s
	I0512 00:56:26.029383    8836 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 00:56:26.038399    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220512005137-7184
	I0512 00:56:27.383617    8836 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220512005137-7184: (1.3450326s)
	I0512 00:56:27.383752    8836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49478 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\running-upgrade-20220512005137-7184\id_rsa Username:docker}
	I0512 00:56:27.704585    8836 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.6751147s)
	I0512 00:56:27.724567    8836 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0512 00:56:27.805786    8836 fix.go:57] fixHost completed within 26.0047197s
	I0512 00:56:27.805786    8836 start.go:81] releasing machines lock for "running-upgrade-20220512005137-7184", held for 26.0048545s
	I0512 00:56:27.819785    8836 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-20220512005137-7184
	I0512 00:56:29.136502    8836 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-20220512005137-7184: (1.3166486s)
	I0512 00:56:29.138538    8836 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0512 00:56:29.146506    8836 ssh_runner.go:195] Run: systemctl --version
	I0512 00:56:29.152502    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220512005137-7184
	I0512 00:56:29.153493    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220512005137-7184
	I0512 00:56:30.633417    8836 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220512005137-7184: (1.4798475s)
	I0512 00:56:30.633417    8836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49478 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\running-upgrade-20220512005137-7184\id_rsa Username:docker}
	I0512 00:56:30.665414    8836 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220512005137-7184: (1.5128338s)
	I0512 00:56:30.665414    8836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49478 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\running-upgrade-20220512005137-7184\id_rsa Username:docker}
	I0512 00:56:31.002709    8836 ssh_runner.go:235] Completed: systemctl --version: (1.856108s)
	I0512 00:56:31.020707    8836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0512 00:56:31.200019    8836 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (2.0603779s)
	I0512 00:56:31.228031    8836 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 00:56:31.324667    8836 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0512 00:56:31.338645    8836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0512 00:56:31.503629    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0512 00:56:32.024174    8836 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0512 00:56:33.021660    8836 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0512 00:56:33.930090    8836 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 00:56:34.138895    8836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0512 00:56:34.921581    8836 ssh_runner.go:195] Run: sudo systemctl start docker
	I0512 00:56:35.314306    8836 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 00:56:35.907968    8836 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 00:56:36.512839    8836 out.go:204] * Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
	I0512 00:56:36.522843    8836 cli_runner.go:164] Run: docker exec -t running-upgrade-20220512005137-7184 dig +short host.docker.internal
	I0512 00:56:37.961027    8836 cli_runner.go:217] Completed: docker exec -t running-upgrade-20220512005137-7184 dig +short host.docker.internal: (1.4380113s)
	I0512 00:56:37.961114    8836 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0512 00:56:37.982015    8836 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0512 00:56:38.003377    8836 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
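
This /etc/hosts edit, like the hostname fix-up earlier, follows one idiom: drop any line ending in <tab><name>, append a fresh "ip<tab>name" entry, and copy the temp file back, so reruns converge instead of piling up duplicates. The same upsert as a pure Go function (an illustrative helper, not a minikube API):

package main

import (
	"fmt"
	"strings"
)

// upsertHostsLine drops any existing entry ending in "\t<name>" and appends a
// fresh "ip\tname" line, matching the grep -v / echo idiom in the log.
func upsertHostsLine(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.65.1\thost.minikube.internal\n"
	// Rerunning with the same arguments yields the same file: the edit converges.
	fmt.Print(upsertHostsLine(hosts, "192.168.65.2", "host.minikube.internal"))
}
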
	I0512 00:56:38.037347    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" running-upgrade-20220512005137-7184
	I0512 00:56:39.173120    8836 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" running-upgrade-20220512005137-7184: (1.1357147s)
	I0512 00:56:39.185598    8836 out.go:177]   - kubeadm.pod-network-cidr=10.244.0.0/16
	I0512 00:56:39.188221    8836 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime docker
	I0512 00:56:39.198801    8836 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 00:56:39.422118    8836 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.0
	k8s.gcr.io/kube-scheduler:v1.18.0
	k8s.gcr.io/kube-controller-manager:v1.18.0
	k8s.gcr.io/kube-apiserver:v1.18.0
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	kindest/kindnetd:0.5.3
	k8s.gcr.io/etcd:3.4.3-0
	gcr.io/k8s-minikube/storage-provisioner:v1.8.1
	
	-- /stdout --
	I0512 00:56:39.422118    8836 docker.go:616] gcr.io/k8s-minikube/storage-provisioner:v5 wasn't preloaded
	I0512 00:56:39.422118    8836 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.18.0 k8s.gcr.io/kube-controller-manager:v1.18.0 k8s.gcr.io/kube-scheduler:v1.18.0 k8s.gcr.io/kube-proxy:v1.18.0 k8s.gcr.io/pause:3.2 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
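
LoadImages compares the required list above against the docker images output: any required tag missing from the runtime must be transferred and loaded, which is why only storage-provisioner:v5 is handled below. The underlying check is a set difference, roughly:

package main

import "fmt"

// missingImages is the check behind "wasn't preloaded": required tags absent
// from the runtime's image list must be transferred and loaded.
func missingImages(required, present []string) []string {
	have := make(map[string]bool, len(present))
	for _, img := range present {
		have[img] = true
	}
	var missing []string
	for _, img := range required {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	return missing
}

func main() {
	present := []string{"k8s.gcr.io/kube-proxy:v1.18.0", "gcr.io/k8s-minikube/storage-provisioner:v1.8.1"}
	required := []string{"k8s.gcr.io/kube-proxy:v1.18.0", "gcr.io/k8s-minikube/storage-provisioner:v5"}
	fmt.Println(missingImages(required, present)) // [gcr.io/k8s-minikube/storage-provisioner:v5]
}
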
	I0512 00:56:39.448552    8836 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0512 00:56:39.458559    8836 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0512 00:56:39.467545    8836 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.18.0
	I0512 00:56:39.471565    8836 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.18.0
	I0512 00:56:39.477551    8836 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.7
	I0512 00:56:39.479556    8836 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.18.0: Error response from daemon: reference does not exist
	I0512 00:56:39.481563    8836 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.18.0
	I0512 00:56:39.488598    8836 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error response from daemon: reference does not exist
	I0512 00:56:39.490544    8836 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.18.0: Error response from daemon: reference does not exist
	I0512 00:56:39.498543    8836 image.go:134] retrieving image: k8s.gcr.io/pause:3.2
	I0512 00:56:39.500551    8836 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.6.7: Error response from daemon: reference does not exist
	I0512 00:56:39.512621    8836 image.go:180] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: reference does not exist
	I0512 00:56:39.522545    8836 image.go:180] daemon lookup for k8s.gcr.io/pause:3.2: Error response from daemon: reference does not exist
	I0512 00:56:39.543564    8836 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.18.0: Error response from daemon: reference does not exist
	I0512 00:56:39.554575    8836 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.18.0
	I0512 00:56:39.563549    8836 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.18.0: Error response from daemon: reference does not exist
	W0512 00:56:39.756476    8836 image.go:190] authn lookup for k8s.gcr.io/kube-scheduler:v1.18.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0512 00:56:40.000806    8836 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.18.0
	W0512 00:56:40.020487    8836 image.go:190] authn lookup for k8s.gcr.io/etcd:3.4.3-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0512 00:56:40.206261    8836 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
	W0512 00:56:40.284941    8836 image.go:190] authn lookup for k8s.gcr.io/kube-controller-manager:v1.18.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0512 00:56:40.466450    8836 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.18.0
	W0512 00:56:40.550764    8836 image.go:190] authn lookup for k8s.gcr.io/coredns:1.6.7 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0512 00:56:40.706498    8836 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.7
	W0512 00:56:40.802899    8836 image.go:190] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0512 00:56:41.048677    8836 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	W0512 00:56:41.055582    8836 image.go:190] authn lookup for k8s.gcr.io/pause:3.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0512 00:56:41.146246    8836 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0512 00:56:41.146246    8836 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0512 00:56:41.146246    8836 docker.go:291] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0512 00:56:41.154247    8836 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0512 00:56:41.215720    8836 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.2
	I0512 00:56:41.307889    8836 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	W0512 00:56:41.319855    8836 image.go:190] authn lookup for k8s.gcr.io/kube-proxy:v1.18.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0512 00:56:41.324091    8836 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0512 00:56:41.407506    8836 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0512 00:56:41.407506    8836 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0512 00:56:41.478226    8836 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.18.0
	W0512 00:56:41.636054    8836 image.go:190] authn lookup for k8s.gcr.io/kube-apiserver:v1.18.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0512 00:56:41.798142    8836 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.18.0
	I0512 00:56:42.113688    8836 docker.go:258] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0512 00:56:42.113838    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0512 00:56:44.091758    8836 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load": (1.9778175s)
	I0512 00:56:44.091860    8836 cache_images.go:315] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 from cache
	I0512 00:56:44.091860    8836 cache_images.go:123] Successfully loaded all cached images
	I0512 00:56:44.091860    8836 cache_images.go:92] LoadImages completed in 4.6695018s
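
The transfer path is the classic stream-into-docker-load pattern: the cached tarball is piped to "docker load", which reads an image tar from stdin. Run locally, without the SSH hop the log uses, that is approximately:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// docker load reads an image tar from stdin; the path matches the one
	// scp'd into the machine above.
	f, err := os.Open("/var/lib/minikube/images/storage-provisioner_v5")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
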
	I0512 00:56:44.108379    8836 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0512 00:56:44.341725    8836 cni.go:95] Creating CNI manager for ""
	I0512 00:56:44.341725    8836 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 00:56:44.341725    8836 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0512 00:56:44.341725    8836 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.3 APIServerPort:8443 KubernetesVersion:v1.18.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-20220512005137-7184 NodeName:running-upgrade-20220512005137-7184 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0512 00:56:44.342413    8836 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "running-upgrade-20220512005137-7184"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
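The generated kubeadm config above is one multi-document YAML: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by `---`. The pod CIDR (10.244.0.0/16) and service CIDR (10.96.0.0/12) are distinct from each other and from the node address 172.17.0.3, which a quick check confirms. A throwaway sketch (not minikube code) using the values from the config:

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	nodeIP := net.ParseIP("172.17.0.3")
    	// pod and service CIDRs from the kubeadm config above; the node
    	// address must fall outside both.
    	for _, cidr := range []string{"10.244.0.0/16", "10.96.0.0/12"} {
    		_, ipnet, err := net.ParseCIDR(cidr)
    		if err != nil {
    			panic(err)
    		}
    		fmt.Printf("%s contains %s: %v\n", cidr, nodeIP, ipnet.Contains(nodeIP))
    	}
    }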
	
	I0512 00:56:44.342413    8836 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=running-upgrade-20220512005137-7184 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-20220512005137-7184 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
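The kubelet unit shown above is a standard systemd drop-in override: the first, empty `ExecStart=` clears whatever command the base unit defined, and the second line sets the override. A hedged sketch of rendering that drop-in with text/template, using values from the log (illustrative; minikube's actual templating lives elsewhere in its codebase):

    package main

    import (
    	"os"
    	"text/template"
    )

    // dropin mirrors the 10-kubeadm.conf content from the log; the empty
    // ExecStart= is deliberate systemd syntax for resetting the command.
    const dropin = `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}
    `

    func main() {
    	t := template.Must(template.New("dropin").Parse(dropin))
    	_ = t.Execute(os.Stdout, map[string]string{
    		"Version": "v1.18.0",
    		"Node":    "running-upgrade-20220512005137-7184",
    		"IP":      "172.17.0.3",
    	})
    }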
	I0512 00:56:44.352687    8836 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.0
	I0512 00:56:44.427318    8836 binaries.go:44] Found k8s binaries, skipping transfer
	I0512 00:56:44.438417    8836 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0512 00:56:44.482176    8836 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0512 00:56:44.524331    8836 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0512 00:56:44.558698    8836 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2072 bytes)
	I0512 00:56:44.616160    8836 ssh_runner.go:195] Run: grep 172.17.0.3	control-plane.minikube.internal$ /etc/hosts
	I0512 00:56:44.628159    8836 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.0.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
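The `/etc/hosts` rewrite above is idempotent: `grep -v` drops any stale `control-plane.minikube.internal` line before the current mapping is appended, so repeated starts never accumulate duplicate entries. The same logic in a small Go sketch, writing to /tmp/hosts.new instead of /etc/hosts to stay harmless (a safety choice for the sketch, not what the log does):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHost removes any line ending in "\t<name>" and appends the
    // current ip<TAB>name mapping, matching the grep -v pipeline above.
    func upsertHost(hosts, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(hosts, "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	data, _ := os.ReadFile("/etc/hosts")
    	out := upsertHost(strings.TrimRight(string(data), "\n"),
    		"172.17.0.3", "control-plane.minikube.internal")
    	_ = os.WriteFile("/tmp/hosts.new", []byte(out), 0644)
    	fmt.Print(out)
    }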
	I0512 00:56:44.661908    8836 certs.go:54] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\running-upgrade-20220512005137-7184 for IP: 172.17.0.3
	I0512 00:56:44.661908    8836 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I0512 00:56:44.662914    8836 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I0512 00:56:44.662914    8836 certs.go:298] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\running-upgrade-20220512005137-7184\client.key
	I0512 00:56:44.662914    8836 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\running-upgrade-20220512005137-7184\apiserver.key.0f3e66d0
	I0512 00:56:44.663912    8836 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\running-upgrade-20220512005137-7184\apiserver.crt.0f3e66d0 with IP's: [172.17.0.3 10.96.0.1 127.0.0.1 10.0.0.1]
	I0512 00:56:44.946467    8836 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\running-upgrade-20220512005137-7184\apiserver.crt.0f3e66d0 ...
	I0512 00:56:44.947462    8836 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\running-upgrade-20220512005137-7184\apiserver.crt.0f3e66d0: {Name:mk49b820b8d0d4c2f466bb975d0146c7fdbd814e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 00:56:44.948462    8836 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\running-upgrade-20220512005137-7184\apiserver.key.0f3e66d0 ...
	I0512 00:56:44.948462    8836 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\running-upgrade-20220512005137-7184\apiserver.key.0f3e66d0: {Name:mk1d6a2317e14626324d3caa4481b902b429517d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 00:56:44.949466    8836 certs.go:320] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\running-upgrade-20220512005137-7184\apiserver.crt.0f3e66d0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\running-upgrade-20220512005137-7184\apiserver.crt
	I0512 00:56:44.956530    8836 certs.go:324] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\running-upgrade-20220512005137-7184\apiserver.key.0f3e66d0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\running-upgrade-20220512005137-7184\apiserver.key
	I0512 00:56:44.957458    8836 certs.go:298] skipping aggregator signed cert generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\running-upgrade-20220512005137-7184\proxy-client.key
	I0512 00:56:44.958464    8836 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184.pem (1338 bytes)
	W0512 00:56:44.959554    8836 certs.go:384] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184_empty.pem, impossibly tiny 0 bytes
	I0512 00:56:44.959554    8836 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0512 00:56:44.959847    8836 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0512 00:56:44.959847    8836 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0512 00:56:44.960407    8836 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0512 00:56:44.960535    8836 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem (1708 bytes)
	I0512 00:56:44.961455    8836 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\running-upgrade-20220512005137-7184\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0512 00:56:45.087123    8836 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\running-upgrade-20220512005137-7184\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0512 00:56:45.148353    8836 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\running-upgrade-20220512005137-7184\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0512 00:56:45.204324    8836 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\running-upgrade-20220512005137-7184\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0512 00:56:45.257887    8836 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0512 00:56:45.328760    8836 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0512 00:56:45.372705    8836 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0512 00:56:45.419714    8836 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0512 00:56:45.464709    8836 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184.pem --> /usr/share/ca-certificates/7184.pem (1338 bytes)
	I0512 00:56:45.515791    8836 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /usr/share/ca-certificates/71842.pem (1708 bytes)
	I0512 00:56:45.578055    8836 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0512 00:56:45.632420    8836 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0512 00:56:45.678420    8836 ssh_runner.go:195] Run: openssl version
	I0512 00:56:45.710285    8836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0512 00:56:45.749662    8836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0512 00:56:45.759653    8836 certs.go:431] hashing: -rwxr-xr-x 1 root root 1111 May 11 23:00 /usr/share/ca-certificates/minikubeCA.pem
	I0512 00:56:45.769654    8836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0512 00:56:45.791654    8836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0512 00:56:45.830652    8836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7184.pem && ln -fs /usr/share/ca-certificates/7184.pem /etc/ssl/certs/7184.pem"
	I0512 00:56:45.861528    8836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7184.pem
	I0512 00:56:45.871533    8836 certs.go:431] hashing: -rwxr-xr-x 1 root root 1338 May 11 23:10 /usr/share/ca-certificates/7184.pem
	I0512 00:56:45.882530    8836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7184.pem
	I0512 00:56:45.915549    8836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7184.pem /etc/ssl/certs/51391683.0"
	I0512 00:56:45.965414    8836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71842.pem && ln -fs /usr/share/ca-certificates/71842.pem /etc/ssl/certs/71842.pem"
	I0512 00:56:45.998881    8836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71842.pem
	I0512 00:56:46.012859    8836 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 11 23:10 /usr/share/ca-certificates/71842.pem
	I0512 00:56:46.022861    8836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71842.pem
	I0512 00:56:46.045861    8836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71842.pem /etc/ssl/certs/3ec20f2e.0"
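The sequence above installs each CA under /usr/share/ca-certificates and symlinks it in /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0), which is how OpenSSL locates trust anchors. Worth noting for later: the apiserver certificate generated a few lines up was issued with the IP SANs 172.17.0.3, 10.96.0.1, 127.0.0.1 and 10.0.0.1. A short sketch for inspecting which SANs a PEM certificate actually carries, the property the kubeadm retries further down trip over (crypto/x509; the path is the on-node one from the log and is assumed to exist wherever this is run):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("DNS SANs:", cert.DNSNames)
    	fmt.Println("IP SANs :", cert.IPAddresses)
    }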
	I0512 00:56:46.066866    8836 kubeadm.go:391] StartCluster: {Name:running-upgrade-20220512005137-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-20220512005137-7184 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 00:56:46.074866    8836 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0512 00:56:46.180447    8836 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0512 00:56:46.208451    8836 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0512 00:56:46.208451    8836 kubeadm.go:601] restartCluster start
	I0512 00:56:46.220451    8836 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0512 00:56:46.245453    8836 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0512 00:56:46.254451    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" running-upgrade-20220512005137-7184
	I0512 00:56:47.418476    8836 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" running-upgrade-20220512005137-7184: (1.1639652s)
	I0512 00:56:47.419475    8836 kubeconfig.go:116] verify returned: extract IP: "running-upgrade-20220512005137-7184" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0512 00:56:47.419475    8836 kubeconfig.go:127] "running-upgrade-20220512005137-7184" context is missing from C:\Users\jenkins.minikube4\minikube-integration\kubeconfig - will repair!
	I0512 00:56:47.420476    8836 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 00:56:47.430477    8836 kapi.go:59] client config for running-upgrade-20220512005137-7184: &rest.Config{Host:"https://127.0.0.1:49480", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\running-upgrade-20220512005137-7184/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\running-upgrade-20220512005137-7184/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1315600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0512 00:56:47.440475    8836 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0512 00:56:47.465804    8836 kubeadm.go:569] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2022-05-12 00:53:53.242754000 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2022-05-12 00:56:44.577712000 +0000
	@@ -23,16 +23,52 @@
	   certSANs: ["127.0.0.1", "localhost", "172.17.0.3"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+controllerManager:
	+  extraArgs:
	+    allocate-node-cidrs: "true"
	+    leader-elect: "false"
	+scheduler:
	+  extraArgs:
	+    leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	-controlPlaneEndpoint: 172.17.0.3:8443
	+controlPlaneEndpoint: control-plane.minikube.internal:8443
	 dns:
	   type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	+    extraArgs:
	+      proxy-refresh-interval: "70000"
	 kubernetesVersion: v1.18.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	   serviceSubnet: 10.96.0.0/12
	+---
	+apiVersion: kubelet.config.k8s.io/v1beta1
	+kind: KubeletConfiguration
	+authentication:
	+  x509:
	+    clientCAFile: /var/lib/minikube/certs/ca.crt
	+cgroupDriver: cgroupfs
	+clusterDomain: "cluster.local"
	+# disable disk resource management by default
	+imageGCHighThresholdPercent: 100
	+evictionHard:
	+  nodefs.available: "0%"
	+  nodefs.inodesFree: "0%"
	+  imagefs.available: "0%"
	+failSwapOn: false
	+staticPodPath: /etc/kubernetes/manifests
	+---
	+apiVersion: kubeproxy.config.k8s.io/v1alpha1
	+kind: KubeProxyConfiguration
	+clusterCIDR: "10.244.0.0/16"
	+metricsBindAddress: 0.0.0.0:10249
	+conntrack:
	+  maxPerCore: 0
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	+  tcpEstablishedTimeout: 0s
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	+  tcpCloseWaitTimeout: 0s
	
	-- /stdout --
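restartCluster decides it "needs reconfigure" by running `diff -u` over the deployed kubeadm.yaml and the freshly rendered kubeadm.yaml.new; any difference takes the restart path. The key hunk here is controlPlaneEndpoint moving from 172.17.0.3:8443 to control-plane.minikube.internal:8443, the very hostname the retries further down fail to verify. A simplified stand-in for the check (byte equality rather than a real diff; not minikube's implementation):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // needsReconfigure reports whether the deployed config differs from
    // the newly generated one, approximating the `sudo diff -u` step.
    func needsReconfigure(oldPath, newPath string) (bool, error) {
    	a, err := os.ReadFile(oldPath)
    	if err != nil {
    		return true, err // a missing old config also forces reconfigure
    	}
    	b, err := os.ReadFile(newPath)
    	if err != nil {
    		return false, err
    	}
    	return !bytes.Equal(a, b), nil
    }

    func main() {
    	changed, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new")
    	fmt.Println("configs differ:", changed, "err:", err)
    }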
	I0512 00:56:47.465804    8836 kubeadm.go:1067] stopping kube-system containers ...
	I0512 00:56:47.472816    8836 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0512 00:56:47.568208    8836 docker.go:442] Stopping containers: [3329df850bae e144e78b3670 877a998aaca1 3cd5fd278ff1 ccfe15f6fb82 e3e108b8fc67 d9dbf265645c d46c41f165df 839e01777dde d90fae423fac 222ceaaf40f7 1ea8badff92f 1555e561b610 82ec388b0dbd 15ea2a03ccb4 43068fdd2c20 952e8d66ad87 bc2ba55432ea ae6cc6eaaef5 4563fcf1e5d6 9ab973ae27d0 86fad7b7ceff 52589df99ac8 c097c3ef674c 53a795a22fde bb2eb6433974 bc316f497958 58f2ebcb9c69 89b8a587823a 889de0f0a48f 6d1e4c6db0a5 391e70a67d60 b652bbf7a10f b706db345626 11bf3e2befad 5fcec9ea40d4]
	I0512 00:56:47.575212    8836 ssh_runner.go:195] Run: docker stop 3329df850bae e144e78b3670 877a998aaca1 3cd5fd278ff1 ccfe15f6fb82 e3e108b8fc67 d9dbf265645c d46c41f165df 839e01777dde d90fae423fac 222ceaaf40f7 1ea8badff92f 1555e561b610 82ec388b0dbd 15ea2a03ccb4 43068fdd2c20 952e8d66ad87 bc2ba55432ea ae6cc6eaaef5 4563fcf1e5d6 9ab973ae27d0 86fad7b7ceff 52589df99ac8 c097c3ef674c 53a795a22fde bb2eb6433974 bc316f497958 58f2ebcb9c69 89b8a587823a 889de0f0a48f 6d1e4c6db0a5 391e70a67d60 b652bbf7a10f b706db345626 11bf3e2befad 5fcec9ea40d4
	I0512 00:56:53.402612    8836 ssh_runner.go:235] Completed: docker stop 3329df850bae e144e78b3670 877a998aaca1 3cd5fd278ff1 ccfe15f6fb82 e3e108b8fc67 d9dbf265645c d46c41f165df 839e01777dde d90fae423fac 222ceaaf40f7 1ea8badff92f 1555e561b610 82ec388b0dbd 15ea2a03ccb4 43068fdd2c20 952e8d66ad87 bc2ba55432ea ae6cc6eaaef5 4563fcf1e5d6 9ab973ae27d0 86fad7b7ceff 52589df99ac8 c097c3ef674c 53a795a22fde bb2eb6433974 bc316f497958 58f2ebcb9c69 89b8a587823a 889de0f0a48f 6d1e4c6db0a5 391e70a67d60 b652bbf7a10f b706db345626 11bf3e2befad 5fcec9ea40d4: (5.8270998s)
	I0512 00:56:53.424617    8836 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0512 00:56:53.643096    8836 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0512 00:56:53.664100    8836 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5590 May 12 00:54 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5626 May 12 00:54 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2058 May 12 00:55 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5574 May 12 00:54 /etc/kubernetes/scheduler.conf
	
	I0512 00:56:53.673099    8836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0512 00:56:53.704200    8836 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0512 00:56:53.723201    8836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0512 00:56:53.752192    8836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0512 00:56:53.771201    8836 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0512 00:56:53.785193    8836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0512 00:56:53.831923    8836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0512 00:56:53.856629    8836 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0512 00:56:53.867639    8836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0512 00:56:53.896632    8836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0512 00:56:53.919641    8836 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0512 00:56:53.940632    8836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
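Each of the four grep/rm pairs above tests whether a kubeconfig already points at https://control-plane.minikube.internal:8443; exit status 1 means the file still references the old endpoint, so it is removed and later regenerated by the kubeconfig phase. A hedged sketch of that per-file check (the destructive remove is left commented out here):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    func main() {
    	endpoint := []byte("https://control-plane.minikube.internal:8443")
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		data, err := os.ReadFile(f)
    		if err != nil || !bytes.Contains(data, endpoint) {
    			fmt.Println("would remove", f)
    			// os.Remove(f) // destructive; disabled in this sketch
    		}
    	}
    }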
	I0512 00:56:53.981644    8836 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0512 00:56:54.004641    8836 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0512 00:56:54.004641    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:56:54.132252    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:56:55.607216    8836 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.4748874s)
	I0512 00:56:55.607216    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:56:56.018611    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:56:56.318787    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
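Rather than a full `kubeadm init`, the restart replays individual phases in order: certs, kubeconfig, kubelet-start, control-plane, etcd, and finally addon (below). A local sketch of that sequence via os/exec, assuming kubeadm is on PATH (minikube actually drives these over SSH with its pinned binaries under /var/lib/minikube/binaries):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// the phase order mirrors the Run lines in the log above
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		out, err := exec.Command("kubeadm", args...).CombinedOutput()
    		fmt.Printf("kubeadm %v: err=%v\n%s\n", p, err, out)
    	}
    }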
	I0512 00:56:56.606196    8836 api_server.go:51] waiting for apiserver process to appear ...
	I0512 00:56:56.622767    8836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 00:56:56.699485    8836 api_server.go:71] duration metric: took 93.2843ms to wait for apiserver process to appear ...
	I0512 00:56:56.699485    8836 api_server.go:87] waiting for apiserver healthz status ...
	I0512 00:56:56.699485    8836 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:49480/healthz ...
	I0512 00:56:56.736648    8836 api_server.go:266] https://127.0.0.1:49480/healthz returned 200:
	ok
	I0512 00:56:56.767203    8836 api_server.go:140] control plane version: v1.18.0
	I0512 00:56:56.767203    8836 api_server.go:130] duration metric: took 67.7142ms to wait for apiserver health ...
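The apiserver readiness gate is a plain HTTPS GET of /healthz on the forwarded port until it answers 200 "ok". A compact sketch of the poll (the port is the one from the log; certificate verification is skipped only to keep the sketch short, whereas the real check trusts the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			// shortcut for the sketch; the real check verifies
    			// against the minikube CA
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for i := 0; i < 30; i++ {
    		resp, err := client.Get("https://127.0.0.1:49480/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return
    			}
    		}
    		time.Sleep(time.Second)
    	}
    }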
	I0512 00:56:56.767203    8836 cni.go:95] Creating CNI manager for ""
	I0512 00:56:56.767203    8836 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 00:56:56.767203    8836 system_pods.go:43] waiting for kube-system pods to appear ...
	I0512 00:56:56.826218    8836 system_pods.go:59] 9 kube-system pods found
	I0512 00:56:56.826218    8836 system_pods.go:61] "coredns-66bff467f8-pb5qz" [d69c0d4d-257a-4a13-bc17-280150e66ad9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0512 00:56:56.826218    8836 system_pods.go:61] "coredns-66bff467f8-xwcrb" [14fddbd3-5ee4-4e3c-bcbe-4d7256b0819a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0512 00:56:56.826218    8836 system_pods.go:61] "etcd-running-upgrade-20220512005137-7184" [4218d647-96b8-45a7-b3e6-a50c4a08fbd9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0512 00:56:56.826218    8836 system_pods.go:61] "kindnet-wm7sg" [fe23a6f8-3c35-4725-9b5f-445ff780708a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0512 00:56:56.826218    8836 system_pods.go:61] "kube-apiserver-running-upgrade-20220512005137-7184" [d687d3e0-63ee-4bf2-82db-fc92d4bd9cab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0512 00:56:56.826218    8836 system_pods.go:61] "kube-controller-manager-running-upgrade-20220512005137-7184" [412a3cb4-26da-4c8b-8293-c15c41bfe9f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0512 00:56:56.826218    8836 system_pods.go:61] "kube-proxy-l6lnp" [6c0218b7-2f76-4350-8062-48afebcf3a9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0512 00:56:56.826218    8836 system_pods.go:61] "kube-scheduler-running-upgrade-20220512005137-7184" [205470e6-1d2f-43aa-9f94-9cfef4a0d069] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0512 00:56:56.826218    8836 system_pods.go:61] "storage-provisioner" [f11b5a02-107f-4292-b755-99216a054488] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0512 00:56:56.826218    8836 system_pods.go:74] duration metric: took 59.0122ms to wait for pod list to return data ...
	I0512 00:56:56.826218    8836 node_conditions.go:102] verifying NodePressure condition ...
	I0512 00:56:56.837195    8836 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0512 00:56:56.837195    8836 node_conditions.go:123] node cpu capacity is 16
	I0512 00:56:56.837195    8836 node_conditions.go:105] duration metric: took 10.9768ms to run NodePressure ...
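After healthz, minikube lists kube-system pods and checks node conditions before moving on; all nine pods report Running but not yet Ready. A rough equivalent with client-go, which is an assumption of convenience here (minikube uses its own kapi helpers), reading the default kubeconfig:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
    	}
    }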
	I0512 00:56:56.837195    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:56:57.297491    8836 retry.go:31] will retry after 110.466µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:56:57.201455    8825 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
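This first failure explains the retry storm that follows: `kubeadm init phase addon all` builds a client against controlPlaneEndpoint (control-plane.minikube.internal:8443), but the running kube-apiserver apparently still presents its pre-upgrade serving certificate, whose SANs cover the old names and IPs but not control-plane.minikube.internal, so TLS verification fails. minikube then retries with exponential backoff, visible in the growing "will retry after" delays below. An illustrative reimplementation of that backoff loop (in the spirit of minikube's retry.go, not the actual helper):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryExpo re-runs fn with a jittered, doubling delay until it
    // succeeds or the total budget is exhausted.
    func retryExpo(fn func() error, initial, total time.Duration) error {
    	deadline := time.Now().Add(total)
    	wait := initial
    	for {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out: %w", err)
    		}
    		sleep := wait/2 + time.Duration(rand.Int63n(int64(wait)))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		wait *= 2
    	}
    }

    func main() {
    	attempts := 0
    	_ = retryExpo(func() error {
    		attempts++
    		if attempts < 5 {
    			return errors.New("x509: certificate is valid for minikubeCA, ... not control-plane.minikube.internal")
    		}
    		return nil
    	}, 100*time.Microsecond, time.Minute)
    }

Until the apiserver is restarted with the regenerated certificate, every attempt hits the same x509 error, which is why the delays keep growing through the attempts below.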
	I0512 00:56:57.306682    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:56:57.442022    8836 retry.go:31] will retry after 216.077µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:56:57.414481    8890 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:56:57.448014    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:56:58.214588    8836 retry.go:31] will retry after 262.026µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:56:58.095753    8925 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:56:58.218697    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:56:58.916719    8836 retry.go:31] will retry after 316.478µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:56:58.818981    8998 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:56:58.927126    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:56:59.711363    8836 retry.go:31] will retry after 468.098µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:56:59.605869    9031 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:56:59.716297    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:57:00.000686    8836 retry.go:31] will retry after 901.244µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:56:59.943874    9099 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:57:00.016033    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:57:00.166616    8836 retry.go:31] will retry after 644.295µs: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:57:00.138856    9112 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:57:00.172929    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:57:00.346840    8836 retry.go:31] will retry after 1.121724ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:57:00.323673    9124 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:57:00.361844    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:57:00.709706    8836 retry.go:31] will retry after 1.529966ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:57:00.626490    9139 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:57:00.722009    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:57:00.861360    8836 retry.go:31] will retry after 3.078972ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:57:00.838835    9151 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:57:00.879383    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:57:01.035215    8836 retry.go:31] will retry after 5.854223ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:57:01.010492    9165 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:57:01.051854    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:57:01.180624    8836 retry.go:31] will retry after 11.362655ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:57:01.158457    9178 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:57:01.193638    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:57:01.611405    8836 retry.go:31] will retry after 9.267303ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:57:01.507901    9197 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:57:01.628559    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:57:02.419854    8836 retry.go:31] will retry after 17.139291ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:57:02.314316    9303 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:57:02.445680    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:57:03.120404    8836 retry.go:31] will retry after 23.881489ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:57:03.026360    9432 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:57:03.157089    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:57:04.113625    8836 retry.go:31] will retry after 42.427055ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:57:03.919123    9531 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:57:04.162613    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:57:05.200312    8836 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.0376462s)
	I0512 00:57:05.200312    8836 retry.go:31] will retry after 51.432832ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:57:05.097299    9658 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:57:05.259438    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:57:05.898423    8836 retry.go:31] will retry after 78.14118ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:57:05.713173    9796 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:57:05.981249    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:57:06.797611    8836 retry.go:31] will retry after 174.255803ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:57:06.701701    9864 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:57:06.977991    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:57:07.329039    8836 retry.go:31] will retry after 159.291408ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:57:07.309373    9992 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:57:07.500606    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:57:08.397648    8836 retry.go:31] will retry after 233.827468ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:57:08.118984   10015 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:57:08.636633    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:57:09.113458    8836 retry.go:31] will retry after 429.392365ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:57:09.008718   10190 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:57:09.551045    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:57:10.214701    8836 retry.go:31] will retry after 801.058534ms: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:57:10.124728   10322 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:57:11.022077    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:57:11.501414    8836 retry.go:31] will retry after 1.529087469s: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:57:11.422680   10438 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:57:13.036458    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:57:13.244172    8836 retry.go:31] will retry after 1.335136154s: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:57:13.213617   10488 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:57:14.581883    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:57:14.723793    8836 retry.go:31] will retry after 2.012724691s: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:57:14.687006   10523 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:57:16.740093    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:57:17.099572    8836 retry.go:31] will retry after 4.744335389s: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:57:17.036757   10668 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:57:21.856729    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 00:57:22.002059    8836 retry.go:31] will retry after 4.014454686s: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:57:21.970437   10767 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:57:26.025465    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	W0512 00:57:26.234267    8836 kubeadm.go:727] addon install failed, will retry: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:57:26.211728   10943 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	I0512 00:57:26.234267    8836 kubeadm.go:605] restartCluster took 40.0237513s
	W0512 00:57:26.235517    8836 out.go:239] ! Unable to restart cluster, will reset it: addons: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
	stdout:
	
	stderr:
	W0512 00:57:26.211728   10943 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	To see the stack trace of this error execute with --v=5 or higher
	
	I0512 00:57:26.235517    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0512 00:57:34.548497    8836 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (8.3124251s)
	I0512 00:57:34.559919    8836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 00:57:34.606485    8836 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0512 00:57:34.638745    8836 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0512 00:57:34.652517    8836 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0512 00:57:34.673510    8836 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0512 00:57:34.673510    8836 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	W0512 00:57:35.462748    8836 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.0
	[preflight] Running pre-flight checks
	
	stderr:
	W0512 00:57:34.808752   11580 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-8443]: Port 8443 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	I0512 00:57:35.462838    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0512 00:57:35.692850    8836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 00:57:35.734171    8836 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0512 00:57:35.746259    8836 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0512 00:57:35.767031    8836 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0512 00:57:35.767031    8836 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0512 00:57:36.498638    8836 kubeadm.go:393] StartCluster complete in 50.4291695s
	I0512 00:57:36.506636    8836 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0512 00:57:36.585159    8836 logs.go:274] 0 containers: []
	W0512 00:57:36.585159    8836 logs.go:276] No container was found matching "kube-apiserver"
	I0512 00:57:36.596351    8836 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0512 00:57:36.679593    8836 logs.go:274] 0 containers: []
	W0512 00:57:36.679593    8836 logs.go:276] No container was found matching "etcd"
	I0512 00:57:36.689312    8836 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0512 00:57:36.767110    8836 logs.go:274] 0 containers: []
	W0512 00:57:36.767110    8836 logs.go:276] No container was found matching "coredns"
	I0512 00:57:36.776064    8836 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0512 00:57:36.857938    8836 logs.go:274] 0 containers: []
	W0512 00:57:36.858033    8836 logs.go:276] No container was found matching "kube-scheduler"
	I0512 00:57:36.867826    8836 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0512 00:57:36.964287    8836 logs.go:274] 0 containers: []
	W0512 00:57:36.964287    8836 logs.go:276] No container was found matching "kube-proxy"
	I0512 00:57:36.975897    8836 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0512 00:57:37.075632    8836 logs.go:274] 0 containers: []
	W0512 00:57:37.075632    8836 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0512 00:57:37.083626    8836 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0512 00:57:37.178446    8836 logs.go:274] 0 containers: []
	W0512 00:57:37.178446    8836 logs.go:276] No container was found matching "storage-provisioner"
	I0512 00:57:37.186441    8836 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0512 00:57:37.261452    8836 logs.go:274] 0 containers: []
	W0512 00:57:37.261452    8836 logs.go:276] No container was found matching "kube-controller-manager"
	I0512 00:57:37.261452    8836 logs.go:123] Gathering logs for dmesg ...
	I0512 00:57:37.261452    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0512 00:57:37.304465    8836 logs.go:123] Gathering logs for describe nodes ...
	I0512 00:57:37.304465    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0512 00:57:37.537114    8836 logs.go:123] Gathering logs for Docker ...
	I0512 00:57:37.537114    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0512 00:57:37.640121    8836 logs.go:123] Gathering logs for container status ...
	I0512 00:57:37.641122    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0512 00:57:39.787783    8836 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.1465504s)
	I0512 00:57:39.787783    8836 logs.go:123] Gathering logs for kubelet ...
	I0512 00:57:39.787783    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0512 00:57:39.863790    8836 logs.go:138] Found kubelet problem: May 12 00:57:06 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:06.310532    8709 pod_workers.go:191] Error syncing pod 112c60df9e36eeaf13a6dd3074765810 ("kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"
	W0512 00:57:39.878784    8836 logs.go:138] Found kubelet problem: May 12 00:57:08 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:08.207139    8709 pod_workers.go:191] Error syncing pod 112c60df9e36eeaf13a6dd3074765810 ("kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"
	W0512 00:57:39.882794    8836 logs.go:138] Found kubelet problem: May 12 00:57:09 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:09.412197    8709 pod_workers.go:191] Error syncing pod 112c60df9e36eeaf13a6dd3074765810 ("kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"
	W0512 00:57:39.885806    8836 logs.go:138] Found kubelet problem: May 12 00:57:10 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:10.808527    8709 pod_workers.go:191] Error syncing pod 112c60df9e36eeaf13a6dd3074765810 ("kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"
	W0512 00:57:39.886797    8836 logs.go:138] Found kubelet problem: May 12 00:57:10 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:10.809635    8709 pod_workers.go:191] Error syncing pod 6eb087e932898681e74c978c21efeebc ("etcd-running-upgrade-20220512005137-7184_kube-system(6eb087e932898681e74c978c21efeebc)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 10s restarting failed container=etcd pod=etcd-running-upgrade-20220512005137-7184_kube-system(6eb087e932898681e74c978c21efeebc)"
	W0512 00:57:39.888781    8836 logs.go:138] Found kubelet problem: May 12 00:57:11 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:11.638082    8709 pod_workers.go:191] Error syncing pod 6eb087e932898681e74c978c21efeebc ("etcd-running-upgrade-20220512005137-7184_kube-system(6eb087e932898681e74c978c21efeebc)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 10s restarting failed container=etcd pod=etcd-running-upgrade-20220512005137-7184_kube-system(6eb087e932898681e74c978c21efeebc)"
	W0512 00:57:39.895782    8836 logs.go:138] Found kubelet problem: May 12 00:57:14 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:14.806338    8709 pod_workers.go:191] Error syncing pod 6eb087e932898681e74c978c21efeebc ("etcd-running-upgrade-20220512005137-7184_kube-system(6eb087e932898681e74c978c21efeebc)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 10s restarting failed container=etcd pod=etcd-running-upgrade-20220512005137-7184_kube-system(6eb087e932898681e74c978c21efeebc)"
	W0512 00:57:39.905782    8836 logs.go:138] Found kubelet problem: May 12 00:57:17 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:17.925354    8709 pod_workers.go:191] Error syncing pod 112c60df9e36eeaf13a6dd3074765810 ("kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"
	W0512 00:57:39.924787    8836 logs.go:138] Found kubelet problem: May 12 00:57:26 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:26.037949    8709 pod_workers.go:191] Error syncing pod 112c60df9e36eeaf13a6dd3074765810 ("kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"
	W0512 00:57:39.926785    8836 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.0
	[preflight] Running pre-flight checks
	
	stderr:
	W0512 00:57:35.925547   11703 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-8443]: Port 8443 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	W0512 00:57:39.926785    8836 out.go:239] * 
	W0512 00:57:39.926785    8836 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.0
	[preflight] Running pre-flight checks
	
	stderr:
	W0512 00:57:35.925547   11703 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-8443]: Port 8443 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W0512 00:57:39.926785    8836 out.go:239] * 
	W0512 00:57:39.928793    8836 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0512 00:57:39.931801    8836 out.go:177] X Problems detected in kubelet:
	I0512 00:57:39.937783    8836 out.go:177]   May 12 00:57:06 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:06.310532    8709 pod_workers.go:191] Error syncing pod 112c60df9e36eeaf13a6dd3074765810 ("kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"
	I0512 00:57:39.943788    8836 out.go:177]   May 12 00:57:08 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:08.207139    8709 pod_workers.go:191] Error syncing pod 112c60df9e36eeaf13a6dd3074765810 ("kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"
	I0512 00:57:39.949833    8836 out.go:177]   May 12 00:57:09 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:09.412197    8709 pod_workers.go:191] Error syncing pod 112c60df9e36eeaf13a6dd3074765810 ("kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"
	I0512 00:57:39.957791    8836 out.go:177] 
	W0512 00:57:39.959792    8836 out.go:239] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.0
	[preflight] Running pre-flight checks
	
	stderr:
	W0512 00:57:35.925547   11703 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-8443]: Port 8443 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W0512 00:57:39.959792    8836 out.go:239] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	W0512 00:57:39.959792    8836 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/5484
	I0512 00:57:39.965798    8836 out.go:177] 

                                                
                                                
** /stderr **
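
The retries above all die the same way: the apiserver's serving certificate, apparently generated by the old v1.9.0 binary before minikube began issuing certs for control-plane.minikube.internal, does not carry that DNS SAN, so every request to https://control-plane.minikube.internal:8443 fails x509 verification. A minimal Go sketch (not part of the test suite; the host:port is the published apiserver endpoint from the docker inspect output below, used here as an illustrative assumption) that dials an endpoint and runs the same hostname check:

	// sancheck.go: dial a TLS endpoint, print the DNS SANs on its leaf
	// certificate, and run the hostname check that fails in the log.
	// Endpoint and hostname are illustrative assumptions.
	package main

	import (
		"crypto/tls"
		"fmt"
	)

	func main() {
		// Skip chain verification; we only want to inspect the leaf cert.
		conn, err := tls.Dial("tcp", "127.0.0.1:49480", &tls.Config{InsecureSkipVerify: true})
		if err != nil {
			fmt.Println("dial:", err)
			return
		}
		defer conn.Close()

		leaf := conn.ConnectionState().PeerCertificates[0]
		fmt.Println("DNS SANs:", leaf.DNSNames) // the names listed in the x509 error above
		if err := leaf.VerifyHostname("control-plane.minikube.internal"); err != nil {
			fmt.Println("hostname check:", err) // reproduces the logged failure
		}
	}

Note the cadence of the retry wrapper in the log (429ms, 801ms, 1.5s, 2s, 4.7s, ...): growing, jittered delays before minikube gives up and falls back to resetting the cluster.
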
version_upgrade_test.go:139: upgrade from v1.9.0 to HEAD failed: out/minikube-windows-amd64.exe start -p running-upgrade-20220512005137-7184 --memory=2200 --alsologtostderr -v=1 --driver=docker: exit status 81
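
The reset-then-init fallback then trips kubeadm preflight: ports 8443, 2379 and 2380 are still bound, evidently by the old kube-apiserver and etcd containers the kubelet keeps restarting (the CrashLoopBackOff entries above point the same way), which is what the GUEST_PORT_IN_USE exit reports. As an aside, the suggested `lsof -p<port>` selects by PID; listing by port is normally spelled `lsof -i :<port>`. A tiny hedged sketch, meant to run inside the guest (an assumption), probing the same three ports:

	// portprobe.go: try to bind the ports kubeadm's preflight flagged; a
	// failed Listen corresponds to an "[ERROR Port-XXXX]: Port XXXX is in
	// use" line. Intended to run inside the minikube container (assumption).
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		for _, port := range []string{"8443", "2379", "2380"} {
			ln, err := net.Listen("tcp", ":"+port)
			if err != nil {
				fmt.Printf("port %s in use: %v\n", port, err)
				continue
			}
			ln.Close()
			fmt.Printf("port %s free\n", port)
		}
	}
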
panic.go:482: *** TestRunningBinaryUpgrade FAILED at 2022-05-12 00:57:40.3435431 +0000 GMT m=+7336.746633101
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-20220512005137-7184
helpers_test.go:231: (dbg) Done: docker inspect running-upgrade-20220512005137-7184: (1.2271612s)
helpers_test.go:235: (dbg) docker inspect running-upgrade-20220512005137-7184:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "69ba9018a13d1cfe6a7f9721b773e713f3d8f11503c2438735aff9b8e2fb8302",
	        "Created": "2022-05-12T00:51:58.0692484Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 141501,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-12T00:51:59.2994882Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/69ba9018a13d1cfe6a7f9721b773e713f3d8f11503c2438735aff9b8e2fb8302/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/69ba9018a13d1cfe6a7f9721b773e713f3d8f11503c2438735aff9b8e2fb8302/hostname",
	        "HostsPath": "/var/lib/docker/containers/69ba9018a13d1cfe6a7f9721b773e713f3d8f11503c2438735aff9b8e2fb8302/hosts",
	        "LogPath": "/var/lib/docker/containers/69ba9018a13d1cfe6a7f9721b773e713f3d8f11503c2438735aff9b8e2fb8302/69ba9018a13d1cfe6a7f9721b773e713f3d8f11503c2438735aff9b8e2fb8302-json.log",
	        "Name": "/running-upgrade-20220512005137-7184",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-20220512005137-7184:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bdf4e789089780f89cfb5a6bdf85c663e0c39b249c5e8c43123180d8ba26d7d4-init/diff:/var/lib/docker/overlay2/6b27ff1445ffae2e30bfbf2628ff8bc1456e4529abbac72add14efe93f4d0a52/diff:/var/lib/docker/overlay2/8d7fde4bd56c884cd7fd7d0e2a670fa90dc216a5b967e3493de4af069db01d3c/diff:/var/lib/docker/overlay2/e2d2bc617cf002c940d4836f6025ff339088f7920f0eabd5cffb6aad086161df/diff:/var/lib/docker/overlay2/98168243010c934d16ba40530f385985da6e8be1a88a487f7c21b4bca4ecb189/diff:/var/lib/docker/overlay2/ed25ecd127a4299f480b4009ef0c23fba417470264048e9e590b5be2d0373db5/diff:/var/lib/docker/overlay2/21b969d6c146472ec1405abebe211ddce4ab350b0e1562ef433e0551b42532b2/diff:/var/lib/docker/overlay2/8cbb9f075d1ccf1d9cde5377b34d674674542065a8a7427111056125c8116f79/diff:/var/lib/docker/overlay2/79546a23d32ddeee050a5e3031d10524a492a80571c8f246f53155c61cd7ac60/diff:/var/lib/docker/overlay2/0c730b27e34e5a51dcb53a99ec85c8f7e50557e68da267ad91393eb27c36b8ba/diff:/var/lib/docker/overlay2/cc1c2d
503311a1cb19fbab4a8eefc517bfe146385707fd738c8a4acc895b1769/diff:/var/lib/docker/overlay2/d35bb1e56ae22133a09bded131e64dd8029ac7ce0621fa655549f8178c52160f/diff:/var/lib/docker/overlay2/7a00674c8fdc38091f80a62c719ba1c72e3cbe7987065405265e343c33148d49/diff:/var/lib/docker/overlay2/5ccf5d56ed6d9650e63af40330e379258b9bc38c893ad6bc8c518dc653282751/diff:/var/lib/docker/overlay2/426b34ff6b18a24beb6fb7a5fb084fdb394870ee3478f30e63b84a96d1182c70/diff:/var/lib/docker/overlay2/fc2f57bc0dfe444176a6a30b6822535eb0ed04ba2f4729116bf2e717bd53ec33/diff:/var/lib/docker/overlay2/8493984268dc4831d464f8ce59ae647723fd0fe1332e8026c0d4a2f1c06a528f/diff:/var/lib/docker/overlay2/b9742a7ef19b0dadd769bdc1e64143c35e56623c97e8929f1a7e57e263591063/diff:/var/lib/docker/overlay2/45dd226b1920f5ae5b7a6614961732881bd6b5ca40f230e9e7bdad84494667b9/diff:/var/lib/docker/overlay2/c0e14f51e7b02aa29d01adc2565d19806ccda76bef7bb4b6dee0b1b67decf043/diff:/var/lib/docker/overlay2/bcd564fa96313abca5c4a4d60c37dc4ee817241179ffdd9011a17105ed5cc1cb/diff:/var/lib/d
ocker/overlay2/05a7914d7a9fbaa540145b92b2a971b5ddf68c1179abb63db947cb989a74495c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bdf4e789089780f89cfb5a6bdf85c663e0c39b249c5e8c43123180d8ba26d7d4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bdf4e789089780f89cfb5a6bdf85c663e0c39b249c5e8c43123180d8ba26d7d4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bdf4e789089780f89cfb5a6bdf85c663e0c39b249c5e8c43123180d8ba26d7d4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-20220512005137-7184",
	                "Source": "/var/lib/docker/volumes/running-upgrade-20220512005137-7184/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-20220512005137-7184",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-20220512005137-7184",
	                "name.minikube.sigs.k8s.io": "running-upgrade-20220512005137-7184",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "87782add78e5f65820ddfd426b2bf7c7161e093ac65364b9d426d6cda30b7c1a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49478"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49479"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49480"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/87782add78e5",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "3815941e918de7f0e6accaa4ea938ed4d357ff1e60baac05a0b74a5d998e61ec",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.3",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:03",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "94f204a1590e4c33047495b242ed46b4b5511b7bcc5f58a2c9ae59af80b8681b",
	                    "EndpointID": "3815941e918de7f0e6accaa4ea938ed4d357ff1e60baac05a0b74a5d998e61ec",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.3",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:03",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
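
Two details in the inspect output are worth noting: every PortBindings entry requests HostPort "0", which asks Docker for an ephemeral host port, and the ports actually assigned (49478-49480 on 127.0.0.1) appear under NetworkSettings.Ports. A short sketch resolving the published apiserver port by shelling out to `docker port` (container name copied from this report):

	// dockerport.go: resolve where the container's 8443/tcp is published
	// on the host, the same mapping shown in NetworkSettings.Ports above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "port",
			"running-upgrade-20220512005137-7184", "8443/tcp").Output()
		if err != nil {
			fmt.Println("docker port:", err)
			return
		}
		fmt.Printf("apiserver published at %s", out) // e.g. 127.0.0.1:49480
	}
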
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-20220512005137-7184 -n running-upgrade-20220512005137-7184
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-20220512005137-7184 -n running-upgrade-20220512005137-7184: exit status 2 (21.440313s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E0512 00:58:03.006056    9900 status.go:422] Error apiserver status: https://127.0.0.1:49480/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
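
The 500 above is the apiserver's own verbose healthz report: every check passes except etcd, consistent with the etcd CrashLoopBackOff in the kubelet log, so the host shows Running while the apiserver is unhealthy. A minimal sketch of roughly the same probe minikube's status command performs (port taken from the inspect output; TLS verification skipped because the serving certificate is self-signed):

	// healthz.go: GET the apiserver's /healthz and print the per-check
	// report. The port is an assumption taken from this report.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://127.0.0.1:49480/healthz?verbose")
		if err != nil {
			fmt.Println("healthz:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status) // 500 Internal Server Error here
		fmt.Print(string(body))  // [+]/[-] lines as captured above
	}
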
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p running-upgrade-20220512005137-7184 logs -n 25

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p running-upgrade-20220512005137-7184 logs -n 25: (1m12.0456005s)
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |------------|------------------------------------------|------------------------------------------|-------------------|---------|---------------------|---------------------|
	|  Command   |                   Args                   |                 Profile                  |       User        | Version |     Start Time      |      End Time       |
	|------------|------------------------------------------|------------------------------------------|-------------------|---------|---------------------|---------------------|
	| delete     | -p                                       | test-preload-20220512003344-7184         | minikube4\jenkins | v1.25.2 | 12 May 22 00:38 GMT | 12 May 22 00:39 GMT |
	|            | test-preload-20220512003344-7184         |                                          |                   |         |                     |                     |
	| start      | -p                                       | scheduled-stop-20220512003922-7184       | minikube4\jenkins | v1.25.2 | 12 May 22 00:39 GMT | 12 May 22 00:41 GMT |
	|            | scheduled-stop-20220512003922-7184       |                                          |                   |         |                     |                     |
	|            | --memory=2048 --driver=docker            |                                          |                   |         |                     |                     |
	| stop       | -p                                       | scheduled-stop-20220512003922-7184       | minikube4\jenkins | v1.25.2 | 12 May 22 00:41 GMT | 12 May 22 00:41 GMT |
	|            | scheduled-stop-20220512003922-7184       |                                          |                   |         |                     |                     |
	|            | --schedule 5m                            |                                          |                   |         |                     |                     |
	| ssh        | -p                                       | scheduled-stop-20220512003922-7184       | minikube4\jenkins | v1.25.2 | 12 May 22 00:41 GMT | 12 May 22 00:41 GMT |
	|            | scheduled-stop-20220512003922-7184       |                                          |                   |         |                     |                     |
	|            | -- sudo systemctl show                   |                                          |                   |         |                     |                     |
	|            | minikube-scheduled-stop --no-page        |                                          |                   |         |                     |                     |
	| stop       | -p                                       | scheduled-stop-20220512003922-7184       | minikube4\jenkins | v1.25.2 | 12 May 22 00:41 GMT | 12 May 22 00:41 GMT |
	|            | scheduled-stop-20220512003922-7184       |                                          |                   |         |                     |                     |
	|            | --schedule 5s                            |                                          |                   |         |                     |                     |
	| delete     | -p                                       | scheduled-stop-20220512003922-7184       | minikube4\jenkins | v1.25.2 | 12 May 22 00:42 GMT | 12 May 22 00:42 GMT |
	|            | scheduled-stop-20220512003922-7184       |                                          |                   |         |                     |                     |
	| start      | -p                                       | skaffold-20220512004259-7184             | minikube4\jenkins | v1.25.2 | 12 May 22 00:43 GMT | 12 May 22 00:44 GMT |
	|            | skaffold-20220512004259-7184             |                                          |                   |         |                     |                     |
	|            | --memory=2600 --driver=docker            |                                          |                   |         |                     |                     |
	| docker-env | --shell none -p                          | skaffold-20220512004259-7184             | skaffold          | v1.25.2 | 12 May 22 00:44 GMT | 12 May 22 00:45 GMT |
	|            | skaffold-20220512004259-7184             |                                          |                   |         |                     |                     |
	|            | --user=skaffold                          |                                          |                   |         |                     |                     |
	| logs       | skaffold-20220512004259-7184             | skaffold-20220512004259-7184             | minikube4\jenkins | v1.25.2 | 12 May 22 00:45 GMT | 12 May 22 00:45 GMT |
	|            | logs -n 25                               |                                          |                   |         |                     |                     |
	| delete     | -p                                       | skaffold-20220512004259-7184             | minikube4\jenkins | v1.25.2 | 12 May 22 00:45 GMT | 12 May 22 00:45 GMT |
	|            | skaffold-20220512004259-7184             |                                          |                   |         |                     |                     |
	| delete     | -p                                       | insufficient-storage-20220512004557-7184 | minikube4\jenkins | v1.25.2 | 12 May 22 00:47 GMT | 12 May 22 00:47 GMT |
	|            | insufficient-storage-20220512004557-7184 |                                          |                   |         |                     |                     |
	| start      | -p                                       | NoKubernetes-20220512004748-7184         | minikube4\jenkins | v1.25.2 | 12 May 22 00:47 GMT | 12 May 22 00:50 GMT |
	|            | NoKubernetes-20220512004748-7184         |                                          |                   |         |                     |                     |
	|            | --driver=docker                          |                                          |                   |         |                     |                     |
	| start      | -p                                       | force-systemd-flag-20220512004748-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:47 GMT | 12 May 22 00:50 GMT |
	|            | force-systemd-flag-20220512004748-7184   |                                          |                   |         |                     |                     |
	|            | --memory=2048 --force-systemd            |                                          |                   |         |                     |                     |
	|            | --alsologtostderr -v=5 --driver=docker   |                                          |                   |         |                     |                     |
	| ssh        | force-systemd-flag-20220512004748-7184   | force-systemd-flag-20220512004748-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:50 GMT | 12 May 22 00:51 GMT |
	|            | ssh docker info --format                 |                                          |                   |         |                     |                     |
	|            | {{.CgroupDriver}}                        |                                          |                   |         |                     |                     |
	| start      | -p                                       | offline-docker-20220512004748-7184       | minikube4\jenkins | v1.25.2 | 12 May 22 00:47 GMT | 12 May 22 00:51 GMT |
	|            | offline-docker-20220512004748-7184       |                                          |                   |         |                     |                     |
	|            | --alsologtostderr -v=1                   |                                          |                   |         |                     |                     |
	|            | --memory=2048 --wait=true                |                                          |                   |         |                     |                     |
	|            | --driver=docker                          |                                          |                   |         |                     |                     |
	| delete     | -p                                       | force-systemd-flag-20220512004748-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:51 GMT | 12 May 22 00:51 GMT |
	|            | force-systemd-flag-20220512004748-7184   |                                          |                   |         |                     |                     |
	| delete     | -p                                       | offline-docker-20220512004748-7184       | minikube4\jenkins | v1.25.2 | 12 May 22 00:51 GMT | 12 May 22 00:51 GMT |
	|            | offline-docker-20220512004748-7184       |                                          |                   |         |                     |                     |
	| start      | -p                                       | NoKubernetes-20220512004748-7184         | minikube4\jenkins | v1.25.2 | 12 May 22 00:51 GMT | 12 May 22 00:51 GMT |
	|            | NoKubernetes-20220512004748-7184         |                                          |                   |         |                     |                     |
	|            | --no-kubernetes --driver=docker          |                                          |                   |         |                     |                     |
	| delete     | -p                                       | NoKubernetes-20220512004748-7184         | minikube4\jenkins | v1.25.2 | 12 May 22 00:51 GMT | 12 May 22 00:52 GMT |
	|            | NoKubernetes-20220512004748-7184         |                                          |                   |         |                     |                     |
	| delete     | -p                                       | NoKubernetes-20220512004748-7184         | minikube4\jenkins | v1.25.2 | 12 May 22 00:52 GMT | 12 May 22 00:53 GMT |
	|            | NoKubernetes-20220512004748-7184         |                                          |                   |         |                     |                     |
	| start      | -p                                       | stopped-upgrade-20220512004748-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:53 GMT | 12 May 22 00:54 GMT |
	|            | stopped-upgrade-20220512004748-7184      |                                          |                   |         |                     |                     |
	|            | --memory=2200 --alsologtostderr          |                                          |                   |         |                     |                     |
	|            | -v=1 --driver=docker                     |                                          |                   |         |                     |                     |
	| logs       | -p                                       | stopped-upgrade-20220512004748-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:54 GMT | 12 May 22 00:54 GMT |
	|            | stopped-upgrade-20220512004748-7184      |                                          |                   |         |                     |                     |
	| delete     | -p                                       | stopped-upgrade-20220512004748-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:54 GMT | 12 May 22 00:55 GMT |
	|            | stopped-upgrade-20220512004748-7184      |                                          |                   |         |                     |                     |
	| start      | -p                                       | kubernetes-upgrade-20220512005507-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:55 GMT | 12 May 22 00:57 GMT |
	|            | kubernetes-upgrade-20220512005507-7184   |                                          |                   |         |                     |                     |
	|            | --memory=2200                            |                                          |                   |         |                     |                     |
	|            | --kubernetes-version=v1.16.0             |                                          |                   |         |                     |                     |
	|            | --alsologtostderr -v=1 --driver=docker   |                                          |                   |         |                     |                     |
	| stop       | -p                                       | kubernetes-upgrade-20220512005507-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:57 GMT | 12 May 22 00:57 GMT |
	|            | kubernetes-upgrade-20220512005507-7184   |                                          |                   |         |                     |                     |
	|------------|------------------------------------------|------------------------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/12 00:57:30
	Running on machine: minikube4
	Binary: Built with gc go1.18.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
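
The four header lines above fix the klog-style line format used for the remainder of this trace. Several minikube processes write into it at once below (PIDs 8048, 3732, 9076 and 8836), so timestamps are only monotonic per PID. As a minimal sketch (not part of minikube or the test suite), a filter like the following separates the stream by PID using exactly the documented format:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

// Matches the documented header: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) (\S+:\d+)\] (.*)$`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some lines below run to several KB
	for sc.Scan() {
		m := klogLine.FindStringSubmatch(strings.TrimSpace(sc.Text()))
		if m == nil {
			continue // stdout/stderr dumps and table rows carry no klog header
		}
		// m[1]=severity, m[2]=mmdd, m[3]=time, m[4]=thread id (the PID here), m[5]=file:line, m[6]=message
		fmt.Printf("pid=%s sev=%s %s %s\n", m[4], m[1], m[5], m[6])
	}
}

Piping the log through this and filtering on a single pid value reconstructs one process's timeline.
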
	I0512 00:57:30.139341    8048 out.go:296] Setting OutFile to fd 1608 ...
	I0512 00:57:30.203450    8048 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 00:57:30.203450    8048 out.go:309] Setting ErrFile to fd 1520...
	I0512 00:57:30.203450    8048 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 00:57:30.221946    8048 out.go:303] Setting JSON to false
	I0512 00:57:30.226705    8048 start.go:115] hostinfo: {"hostname":"minikube4","uptime":15503,"bootTime":1652301547,"procs":167,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0512 00:57:30.227793    8048 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0512 00:57:30.233083    8048 out.go:177] * [kubernetes-upgrade-20220512005507-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0512 00:57:30.235005    8048 notify.go:193] Checking for updates...
	I0512 00:57:30.237220    8048 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0512 00:57:30.239718    8048 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0512 00:57:30.242539    8048 out.go:177]   - MINIKUBE_LOCATION=13639
	I0512 00:57:30.244918    8048 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0512 00:57:27.064678    3732 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-20220512005316-7184 returned with exit code 1
	I0512 00:57:27.064678    3732 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} missing-upgrade-20220512005316-7184: (1.139854s)
	I0512 00:57:27.072668    3732 cli_runner.go:164] Run: docker network inspect  --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0512 00:57:28.202824    3732 cli_runner.go:211] docker network inspect  --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 00:57:28.202824    3732 cli_runner.go:217] Completed: docker network inspect  --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1300981s)
	I0512 00:57:28.211830    3732 network_create.go:272] running [docker network inspect ] to gather additional debugging logs...
	I0512 00:57:28.211830    3732 cli_runner.go:164] Run: docker network inspect 
	W0512 00:57:29.286441    3732 cli_runner.go:211] docker network inspect  returned with exit code 1
	I0512 00:57:29.286459    3732 cli_runner.go:217] Completed: docker network inspect : (1.0744468s)
	I0512 00:57:29.286516    3732 network_create.go:275] error running [docker network inspect ]: docker network inspect : exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: 
	I0512 00:57:29.286642    3732 network_create.go:277] output of [docker network inspect ]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: 
	
	** /stderr **
	W0512 00:57:29.288034    3732 delete.go:139] delete failed (probably ok) <nil>
	I0512 00:57:29.288034    3732 fix.go:115] Sleeping 1 second for extra luck!
	I0512 00:57:30.297334    3732 start.go:131] createHost starting for "m01" (driver="docker")
	I0512 00:57:30.304153    3732 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0512 00:57:30.304533    3732 start.go:165] libmachine.API.Create for "missing-upgrade-20220512005316-7184" (driver="docker")
	I0512 00:57:30.304625    3732 client.go:168] LocalClient.Create starting
	I0512 00:57:30.305114    3732 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0512 00:57:30.305472    3732 main.go:134] libmachine: Decoding PEM data...
	I0512 00:57:30.305532    3732 main.go:134] libmachine: Parsing certificate...
	I0512 00:57:30.305823    3732 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0512 00:57:30.306092    3732 main.go:134] libmachine: Decoding PEM data...
	I0512 00:57:30.306366    3732 main.go:134] libmachine: Parsing certificate...
	I0512 00:57:30.317818    3732 cli_runner.go:164] Run: docker network inspect missing-upgrade-20220512005316-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 00:57:26.893103    9076 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} pause-20220512005140-7184: (1.1406647s)
	I0512 00:57:26.902867    9076 cli_runner.go:164] Run: docker container inspect pause-20220512005140-7184 --format={{.State.Status}}
	I0512 00:57:28.046824    9076 cli_runner.go:217] Completed: docker container inspect pause-20220512005140-7184 --format={{.State.Status}}: (1.1438984s)
	I0512 00:57:28.056294    9076 cli_runner.go:164] Run: docker exec --privileged -t pause-20220512005140-7184 /bin/bash -c "sudo init 0"
	W0512 00:57:29.175728    9076 cli_runner.go:211] docker exec --privileged -t pause-20220512005140-7184 /bin/bash -c "sudo init 0" returned with exit code 1
	I0512 00:57:29.175728    9076 cli_runner.go:217] Completed: docker exec --privileged -t pause-20220512005140-7184 /bin/bash -c "sudo init 0": (1.1193757s)
	I0512 00:57:29.175728    9076 oci.go:625] error shutdown pause-20220512005140-7184: docker exec --privileged -t pause-20220512005140-7184 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 492c4542535d839205229783f800eb034aa7708cfce85f337345bbf815be4aad is not running
	I0512 00:57:30.185581    9076 cli_runner.go:164] Run: docker container inspect pause-20220512005140-7184 --format={{.State.Status}}
	I0512 00:57:31.284954    9076 cli_runner.go:217] Completed: docker container inspect pause-20220512005140-7184 --format={{.State.Status}}: (1.0993159s)
	I0512 00:57:31.284954    9076 oci.go:639] temporary error: container pause-20220512005140-7184 status is  but expect it to be exited
	I0512 00:57:31.284954    9076 oci.go:645] Successfully shutdown container pause-20220512005140-7184
	I0512 00:57:31.291944    9076 cli_runner.go:164] Run: docker rm -f -v pause-20220512005140-7184
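
In the lines above, PID 9076 is tearing down the pause profile: the graceful `docker exec ... sudo init 0` exits with status 1 because the container has already stopped, oci.go accepts that state as shut down, and the container is then force-removed. A hedged sketch of the same teardown via the docker CLI (container name copied from the log; minikube's real implementation lives in its oci package and does more checking):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const name = "pause-20220512005140-7184" // container name taken from the log above
	// Graceful shutdown first; exit status 1 with "is not running" simply
	// means the container is already down, which is the state we want.
	if out, err := exec.Command("docker", "exec", "--privileged", "-t", name,
		"/bin/bash", "-c", "sudo init 0").CombinedOutput(); err != nil {
		fmt.Printf("shutdown: %v: %s (treating as already stopped)\n", err, out)
	}
	// Force removal also deletes the container's anonymous volumes (-v).
	if out, err := exec.Command("docker", "rm", "-f", "-v", name).CombinedOutput(); err != nil {
		fmt.Printf("rm -f -v: %v: %s\n", err, out)
	}
}
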
	I0512 00:57:30.247421    8048 config.go:178] Loaded profile config "kubernetes-upgrade-20220512005507-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0512 00:57:30.248674    8048 driver.go:358] Setting default libvirt URI to qemu:///system
	I0512 00:57:33.001251    8048 docker.go:137] docker version: linux-20.10.14
	I0512 00:57:33.012850    8048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 00:57:35.271014    8048 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.258048s)
	I0512 00:57:35.271014    8048 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:60 OomKillDisable:true NGoroutines:51 SystemTime:2022-05-12 00:57:34.0866239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
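
The `docker system info --format "{{json .}}"` call above returns the whole daemon description as a single JSON object, which minikube decodes before validating the driver. A small sketch of the same call, decoding only a handful of the fields visible in the log line (this struct is illustrative, not minikube's actual info type):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// A trimmed view of the JSON emitted by `docker system info --format "{{json .}}"`;
// the field names match keys visible in the log line above.
type dockerInfo struct {
	ServerVersion   string `json:"ServerVersion"`
	OperatingSystem string `json:"OperatingSystem"`
	CgroupDriver    string `json:"CgroupDriver"`
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("docker %s on %s: %d CPUs, %d bytes RAM, cgroup driver %s\n",
		info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal, info.CgroupDriver)
}
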
	I0512 00:57:35.274019    8048 out.go:177] * Using the docker driver based on existing profile
	W0512 00:57:31.412693    3732 cli_runner.go:211] docker network inspect missing-upgrade-20220512005316-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 00:57:31.412693    3732 cli_runner.go:217] Completed: docker network inspect missing-upgrade-20220512005316-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0948185s)
	I0512 00:57:31.419696    3732 network_create.go:272] running [docker network inspect missing-upgrade-20220512005316-7184] to gather additional debugging logs...
	I0512 00:57:31.419696    3732 cli_runner.go:164] Run: docker network inspect missing-upgrade-20220512005316-7184
	W0512 00:57:32.511932    3732 cli_runner.go:211] docker network inspect missing-upgrade-20220512005316-7184 returned with exit code 1
	I0512 00:57:32.511932    3732 cli_runner.go:217] Completed: docker network inspect missing-upgrade-20220512005316-7184: (1.0921795s)
	I0512 00:57:32.511932    3732 network_create.go:275] error running [docker network inspect missing-upgrade-20220512005316-7184]: docker network inspect missing-upgrade-20220512005316-7184: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: missing-upgrade-20220512005316-7184
	I0512 00:57:32.511932    3732 network_create.go:277] output of [docker network inspect missing-upgrade-20220512005316-7184]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: missing-upgrade-20220512005316-7184
	
	** /stderr **
	I0512 00:57:32.514900    3732 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 00:57:33.630490    3732 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1155328s)
	I0512 00:57:33.650496    3732 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000ba8340] misses:0}
	I0512 00:57:33.650496    3732 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 00:57:33.650496    3732 network_create.go:115] attempt to create docker network missing-upgrade-20220512005316-7184 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0512 00:57:33.657490    3732 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true missing-upgrade-20220512005316-7184
	W0512 00:57:34.818292    3732 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true missing-upgrade-20220512005316-7184 returned with exit code 1
	I0512 00:57:34.818292    3732 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true missing-upgrade-20220512005316-7184: (1.1606944s)
	W0512 00:57:34.818292    3732 network_create.go:107] failed to create docker network missing-upgrade-20220512005316-7184 192.168.49.0/24, will retry: subnet is taken
	I0512 00:57:34.834981    3732 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ba8340] amended:false}} dirty:map[] misses:0}
	I0512 00:57:34.834981    3732 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 00:57:34.851982    3732 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000ba8340] amended:true}} dirty:map[192.168.49.0:0xc000ba8340 192.168.58.0:0xc0000069a8] misses:0}
	I0512 00:57:34.851982    3732 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 00:57:34.851982    3732 network_create.go:115] attempt to create docker network missing-upgrade-20220512005316-7184 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0512 00:57:34.858980    3732 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true missing-upgrade-20220512005316-7184
	I0512 00:57:36.121946    3732 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true missing-upgrade-20220512005316-7184: (1.2629006s)
	I0512 00:57:36.121946    3732 network_create.go:99] docker network missing-upgrade-20220512005316-7184 192.168.58.0/24 created
	I0512 00:57:36.121946    3732 kic.go:106] calculated static IP "192.168.58.2" for the "missing-upgrade-20220512005316-7184" container
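
The exchange above shows the subnet fallback at work: creating the network on 192.168.49.0/24 fails because the subnet is taken, the unexpired reservation is skipped, and the next candidate 192.168.58.0/24 succeeds. A hedged sketch of that retry loop follows; the network name and the step of 9 between candidate /24 blocks are assumptions for illustration (the step matches the two subnets seen here):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const name = "example-net" // hypothetical network name, not a minikube profile
	// First candidate mirrors the log: 192.168.49.0/24, then 192.168.58.0/24, ...
	for third := 49; third <= 247; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
		if err != nil {
			// Docker rejects overlapping subnets; move on to the next block.
			fmt.Printf("subnet %s rejected: %v: %s", subnet, err, out)
			continue
		}
		fmt.Printf("created network %s on %s (gateway %s)\n", name, subnet, gateway)
		return
	}
	fmt.Println("no free /24 found in the scanned range")
}
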
	I0512 00:57:36.136866    3732 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0512 00:57:32.466274    9076 cli_runner.go:217] Completed: docker rm -f -v pause-20220512005140-7184: (1.1742689s)
	I0512 00:57:32.476370    9076 cli_runner.go:164] Run: docker container inspect -f {{.Id}} pause-20220512005140-7184
	W0512 00:57:33.598496    9076 cli_runner.go:211] docker container inspect -f {{.Id}} pause-20220512005140-7184 returned with exit code 1
	I0512 00:57:33.598496    9076 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} pause-20220512005140-7184: (1.1220682s)
	I0512 00:57:33.605505    9076 cli_runner.go:164] Run: docker network inspect pause-20220512005140-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0512 00:57:34.722977    9076 cli_runner.go:211] docker network inspect pause-20220512005140-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 00:57:34.722977    9076 cli_runner.go:217] Completed: docker network inspect pause-20220512005140-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1174144s)
	I0512 00:57:34.728975    9076 network_create.go:272] running [docker network inspect pause-20220512005140-7184] to gather additional debugging logs...
	I0512 00:57:34.728975    9076 cli_runner.go:164] Run: docker network inspect pause-20220512005140-7184
	W0512 00:57:35.915885    9076 cli_runner.go:211] docker network inspect pause-20220512005140-7184 returned with exit code 1
	I0512 00:57:35.915885    9076 cli_runner.go:217] Completed: docker network inspect pause-20220512005140-7184: (1.1868492s)
	I0512 00:57:35.915885    9076 network_create.go:275] error running [docker network inspect pause-20220512005140-7184]: docker network inspect pause-20220512005140-7184: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: pause-20220512005140-7184
	I0512 00:57:35.915885    9076 network_create.go:277] output of [docker network inspect pause-20220512005140-7184]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: pause-20220512005140-7184
	
	** /stderr **
	W0512 00:57:35.916883    9076 delete.go:139] delete failed (probably ok) <nil>
	I0512 00:57:35.916883    9076 fix.go:115] Sleeping 1 second for extra luck!
	I0512 00:57:34.548497    8836 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (8.3124251s)
	I0512 00:57:34.559919    8836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 00:57:34.606485    8836 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0512 00:57:34.638745    8836 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0512 00:57:34.652517    8836 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0512 00:57:34.673510    8836 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0512 00:57:34.673510    8836 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	W0512 00:57:35.462748    8836 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.0
	[preflight] Running pre-flight checks
	
	stderr:
	W0512 00:57:34.808752   11580 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-8443]: Port 8443 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
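
The preflight failure above means kubeadm found the control-plane ports already bound, consistent with a previous apiserver/etcd still running inside the container. As a quick, hypothetical way to reproduce the check, one can simply try to bind each of the three reported ports:

package main

import (
	"fmt"
	"net"
)

func main() {
	// The three ports kubeadm's preflight reported as taken in the log above.
	for _, port := range []string{"8443", "2379", "2380"} {
		ln, err := net.Listen("tcp", ":"+port)
		if err != nil {
			fmt.Printf("port %s is in use: %v\n", port, err)
			continue
		}
		ln.Close()
		fmt.Printf("port %s is free\n", port)
	}
}
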
	
	I0512 00:57:35.462838    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0512 00:57:35.692850    8836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 00:57:35.734171    8836 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0512 00:57:35.746259    8836 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0512 00:57:35.767031    8836 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0512 00:57:35.767031    8836 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0512 00:57:36.498638    8836 kubeadm.go:393] StartCluster complete in 50.4291695s
	I0512 00:57:36.506636    8836 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0512 00:57:36.925061    9076 start.go:131] createHost starting for "" (driver="docker")
	I0512 00:57:35.278033    8048 start.go:284] selected driver: docker
	I0512 00:57:35.278033    8048 start.go:801] validating driver "docker" against &{Name:kubernetes-upgrade-20220512005507-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220512005507-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 00:57:35.278033    8048 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0512 00:57:35.371759    8048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 00:57:37.542118    8048 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1702473s)
	I0512 00:57:37.542118    8048 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:60 OomKillDisable:true NGoroutines:51 SystemTime:2022-05-12 00:57:36.4860248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 00:57:37.542118    8048 cni.go:95] Creating CNI manager for ""
	I0512 00:57:37.542118    8048 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 00:57:37.542118    8048 start_flags.go:306] config:
	{Name:kubernetes-upgrade-20220512005507-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:kubernetes-upgrade-20220512005507-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 00:57:37.546118    8048 out.go:177] * Starting control plane node kubernetes-upgrade-20220512005507-7184 in cluster kubernetes-upgrade-20220512005507-7184
	I0512 00:57:37.550109    8048 cache.go:120] Beginning downloading kic base image for docker with docker
	I0512 00:57:37.553119    8048 out.go:177] * Pulling base image ...
	I0512 00:57:37.556117    8048 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime docker
	I0512 00:57:37.556117    8048 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0512 00:57:37.556117    8048 preload.go:148] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4
	I0512 00:57:37.556117    8048 cache.go:57] Caching tarball of preloaded images
	I0512 00:57:37.556117    8048 preload.go:174] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0512 00:57:37.556117    8048 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6-rc.0 on docker
	I0512 00:57:37.556117    8048 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-20220512005507-7184\config.json ...
	I0512 00:57:38.626765    8048 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
	I0512 00:57:38.626828    8048 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in daemon, skipping load
	I0512 00:57:38.626828    8048 cache.go:206] Successfully downloaded all kic artifacts
	I0512 00:57:38.626961    8048 start.go:352] acquiring machines lock for kubernetes-upgrade-20220512005507-7184: {Name:mk7e6675a844f5828604be28939544b5cd10b73c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 00:57:38.627131    8048 start.go:356] acquired machines lock for "kubernetes-upgrade-20220512005507-7184" in 137.9µs
	I0512 00:57:38.627131    8048 start.go:94] Skipping create...Using existing machine configuration
	I0512 00:57:38.627131    8048 fix.go:55] fixHost starting: 
	I0512 00:57:38.645242    8048 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220512005507-7184 --format={{.State.Status}}
	I0512 00:57:39.756777    8048 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220512005507-7184 --format={{.State.Status}}: (1.1114771s)
	I0512 00:57:39.756777    8048 fix.go:103] recreateIfNeeded on kubernetes-upgrade-20220512005507-7184: state=Stopped err=<nil>
	W0512 00:57:39.756777    8048 fix.go:129] unexpected machine state, will restart: <nil>
	I0512 00:57:39.759785    8048 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-20220512005507-7184" ...
	I0512 00:57:36.585159    8836 logs.go:274] 0 containers: []
	W0512 00:57:36.585159    8836 logs.go:276] No container was found matching "kube-apiserver"
	I0512 00:57:36.596351    8836 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0512 00:57:36.679593    8836 logs.go:274] 0 containers: []
	W0512 00:57:36.679593    8836 logs.go:276] No container was found matching "etcd"
	I0512 00:57:36.689312    8836 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0512 00:57:36.767110    8836 logs.go:274] 0 containers: []
	W0512 00:57:36.767110    8836 logs.go:276] No container was found matching "coredns"
	I0512 00:57:36.776064    8836 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0512 00:57:36.857938    8836 logs.go:274] 0 containers: []
	W0512 00:57:36.858033    8836 logs.go:276] No container was found matching "kube-scheduler"
	I0512 00:57:36.867826    8836 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0512 00:57:36.964287    8836 logs.go:274] 0 containers: []
	W0512 00:57:36.964287    8836 logs.go:276] No container was found matching "kube-proxy"
	I0512 00:57:36.975897    8836 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0512 00:57:37.075632    8836 logs.go:274] 0 containers: []
	W0512 00:57:37.075632    8836 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0512 00:57:37.083626    8836 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0512 00:57:37.178446    8836 logs.go:274] 0 containers: []
	W0512 00:57:37.178446    8836 logs.go:276] No container was found matching "storage-provisioner"
	I0512 00:57:37.186441    8836 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0512 00:57:37.261452    8836 logs.go:274] 0 containers: []
	W0512 00:57:37.261452    8836 logs.go:276] No container was found matching "kube-controller-manager"
	I0512 00:57:37.261452    8836 logs.go:123] Gathering logs for dmesg ...
	I0512 00:57:37.261452    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0512 00:57:37.304465    8836 logs.go:123] Gathering logs for describe nodes ...
	I0512 00:57:37.304465    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0512 00:57:37.537114    8836 logs.go:123] Gathering logs for Docker ...
	I0512 00:57:37.537114    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0512 00:57:37.640121    8836 logs.go:123] Gathering logs for container status ...
	I0512 00:57:37.641122    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0512 00:57:39.787783    8836 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.1465504s)
	I0512 00:57:39.787783    8836 logs.go:123] Gathering logs for kubelet ...
	I0512 00:57:39.787783    8836 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0512 00:57:39.863790    8836 logs.go:138] Found kubelet problem: May 12 00:57:06 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:06.310532    8709 pod_workers.go:191] Error syncing pod 112c60df9e36eeaf13a6dd3074765810 ("kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"
	W0512 00:57:39.878784    8836 logs.go:138] Found kubelet problem: May 12 00:57:08 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:08.207139    8709 pod_workers.go:191] Error syncing pod 112c60df9e36eeaf13a6dd3074765810 ("kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"
	W0512 00:57:39.882794    8836 logs.go:138] Found kubelet problem: May 12 00:57:09 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:09.412197    8709 pod_workers.go:191] Error syncing pod 112c60df9e36eeaf13a6dd3074765810 ("kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"
	W0512 00:57:39.885806    8836 logs.go:138] Found kubelet problem: May 12 00:57:10 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:10.808527    8709 pod_workers.go:191] Error syncing pod 112c60df9e36eeaf13a6dd3074765810 ("kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"
	W0512 00:57:39.886797    8836 logs.go:138] Found kubelet problem: May 12 00:57:10 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:10.809635    8709 pod_workers.go:191] Error syncing pod 6eb087e932898681e74c978c21efeebc ("etcd-running-upgrade-20220512005137-7184_kube-system(6eb087e932898681e74c978c21efeebc)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 10s restarting failed container=etcd pod=etcd-running-upgrade-20220512005137-7184_kube-system(6eb087e932898681e74c978c21efeebc)"
	W0512 00:57:39.888781    8836 logs.go:138] Found kubelet problem: May 12 00:57:11 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:11.638082    8709 pod_workers.go:191] Error syncing pod 6eb087e932898681e74c978c21efeebc ("etcd-running-upgrade-20220512005137-7184_kube-system(6eb087e932898681e74c978c21efeebc)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 10s restarting failed container=etcd pod=etcd-running-upgrade-20220512005137-7184_kube-system(6eb087e932898681e74c978c21efeebc)"
	W0512 00:57:39.895782    8836 logs.go:138] Found kubelet problem: May 12 00:57:14 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:14.806338    8709 pod_workers.go:191] Error syncing pod 6eb087e932898681e74c978c21efeebc ("etcd-running-upgrade-20220512005137-7184_kube-system(6eb087e932898681e74c978c21efeebc)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 10s restarting failed container=etcd pod=etcd-running-upgrade-20220512005137-7184_kube-system(6eb087e932898681e74c978c21efeebc)"
	W0512 00:57:39.905782    8836 logs.go:138] Found kubelet problem: May 12 00:57:17 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:17.925354    8709 pod_workers.go:191] Error syncing pod 112c60df9e36eeaf13a6dd3074765810 ("kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"
	W0512 00:57:39.924787    8836 logs.go:138] Found kubelet problem: May 12 00:57:26 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:26.037949    8709 pod_workers.go:191] Error syncing pod 112c60df9e36eeaf13a6dd3074765810 ("kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"
	W0512 00:57:39.926785    8836 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.0
	[preflight] Running pre-flight checks
	
	stderr:
	W0512 00:57:35.925547   11703 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-8443]: Port 8443 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	W0512 00:57:39.926785    8836 out.go:239] * 
	W0512 00:57:39.926785    8836 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.0
	[preflight] Running pre-flight checks
	
	stderr:
	W0512 00:57:35.925547   11703 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-8443]: Port 8443 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W0512 00:57:39.926785    8836 out.go:239] * 
	W0512 00:57:39.928793    8836 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0512 00:57:39.931801    8836 out.go:177] X Problems detected in kubelet:
	I0512 00:57:39.937783    8836 out.go:177]   May 12 00:57:06 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:06.310532    8709 pod_workers.go:191] Error syncing pod 112c60df9e36eeaf13a6dd3074765810 ("kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"
	I0512 00:57:39.943788    8836 out.go:177]   May 12 00:57:08 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:08.207139    8709 pod_workers.go:191] Error syncing pod 112c60df9e36eeaf13a6dd3074765810 ("kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"
	I0512 00:57:39.949833    8836 out.go:177]   May 12 00:57:09 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:09.412197    8709 pod_workers.go:191] Error syncing pod 112c60df9e36eeaf13a6dd3074765810 ("kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"
	I0512 00:57:39.957791    8836 out.go:177] 
	W0512 00:57:39.959792    8836 out.go:239] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.0
	[preflight] Running pre-flight checks
	
	stderr:
	W0512 00:57:35.925547   11703 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-8443]: Port 8443 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W0512 00:57:39.959792    8836 out.go:239] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	W0512 00:57:39.959792    8836 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/5484
	I0512 00:57:39.965798    8836 out.go:177] 
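The three fatal preflight errors above (ports 8443, 2379, 2380) all come from the same kind of probe: kubeadm tries to bind the port and treats a failed bind as "in use". A minimal sketch of that probe in Go — assuming nothing about kubeadm's internals beyond the bind-and-release behaviour the log shows:

package main

import (
	"fmt"
	"net"
)

// portFree reports whether a TCP port can be bound on all interfaces.
// A failed bind is what the Port-8443/2379/2380 preflight checks report
// as "[ERROR Port-<n>]: Port <n> is in use".
func portFree(port int) bool {
	l, err := net.Listen("tcp", fmt.Sprintf(":%d", port))
	if err != nil {
		return false // something is already listening (or we lack permission)
	}
	l.Close()
	return true
}

func main() {
	for _, p := range []int{8443, 2379, 2380} {
		if !portFree(p) {
			fmt.Printf("[ERROR Port-%d]: Port %d is in use\n", p, p)
		}
	}
}

Here all three ports are held by the previous control plane that was never torn down, which is why minikube's suggestion points at finding and killing the conflicting process.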
	I0512 00:57:39.770775    8048 cli_runner.go:164] Run: docker start kubernetes-upgrade-20220512005507-7184
	I0512 00:57:37.241445    3732 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1045224s)
	I0512 00:57:37.248453    3732 cli_runner.go:164] Run: docker volume create missing-upgrade-20220512005316-7184 --label name.minikube.sigs.k8s.io=missing-upgrade-20220512005316-7184 --label created_by.minikube.sigs.k8s.io=true
	I0512 00:57:38.330671    3732 cli_runner.go:217] Completed: docker volume create missing-upgrade-20220512005316-7184 --label name.minikube.sigs.k8s.io=missing-upgrade-20220512005316-7184 --label created_by.minikube.sigs.k8s.io=true: (1.0821627s)
	I0512 00:57:38.330671    3732 oci.go:103] Successfully created a docker volume missing-upgrade-20220512005316-7184
	I0512 00:57:38.339570    3732 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-20220512005316-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-20220512005316-7184 --entrypoint /usr/bin/test -v missing-upgrade-20220512005316-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib
	I0512 00:57:41.090544    3732 cli_runner.go:217] Completed: docker run --rm --name missing-upgrade-20220512005316-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-20220512005316-7184 --entrypoint /usr/bin/test -v missing-upgrade-20220512005316-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib: (2.7508323s)
	I0512 00:57:41.090544    3732 oci.go:107] Successfully prepared a docker volume missing-upgrade-20220512005316-7184
	I0512 00:57:41.090544    3732 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime docker
	I0512 00:57:41.098545    3732 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 00:57:36.929984    9076 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0512 00:57:36.930772    9076 start.go:165] libmachine.API.Create for "pause-20220512005140-7184" (driver="docker")
	I0512 00:57:36.930772    9076 client.go:168] LocalClient.Create starting
	I0512 00:57:36.931282    9076 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0512 00:57:36.931462    9076 main.go:134] libmachine: Decoding PEM data...
	I0512 00:57:36.931539    9076 main.go:134] libmachine: Parsing certificate...
	I0512 00:57:36.931726    9076 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0512 00:57:36.931909    9076 main.go:134] libmachine: Decoding PEM data...
	I0512 00:57:36.931909    9076 main.go:134] libmachine: Parsing certificate...
	I0512 00:57:36.940965    9076 cli_runner.go:164] Run: docker network inspect pause-20220512005140-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0512 00:57:38.051231    9076 cli_runner.go:211] docker network inspect pause-20220512005140-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 00:57:38.051231    9076 cli_runner.go:217] Completed: docker network inspect pause-20220512005140-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.110021s)
	I0512 00:57:38.058339    9076 network_create.go:272] running [docker network inspect pause-20220512005140-7184] to gather additional debugging logs...
	I0512 00:57:38.058339    9076 cli_runner.go:164] Run: docker network inspect pause-20220512005140-7184
	W0512 00:57:39.145615    9076 cli_runner.go:211] docker network inspect pause-20220512005140-7184 returned with exit code 1
	I0512 00:57:39.145615    9076 cli_runner.go:217] Completed: docker network inspect pause-20220512005140-7184: (1.087132s)
	I0512 00:57:39.145615    9076 network_create.go:275] error running [docker network inspect pause-20220512005140-7184]: docker network inspect pause-20220512005140-7184: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: pause-20220512005140-7184
	I0512 00:57:39.145615    9076 network_create.go:277] output of [docker network inspect pause-20220512005140-7184]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: pause-20220512005140-7184
	
	** /stderr **
	I0512 00:57:39.153501    9076 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 00:57:40.322532    9076 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1689706s)
	I0512 00:57:40.343543    9076 network.go:284] reusing subnet 192.168.49.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00030d138] amended:false}} dirty:map[] misses:0}
	I0512 00:57:40.343543    9076 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 00:57:40.343543    9076 network_create.go:115] attempt to create docker network pause-20220512005140-7184 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0512 00:57:40.351546    9076 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220512005140-7184
	I0512 00:57:42.172399    8048 cli_runner.go:217] Completed: docker start kubernetes-upgrade-20220512005507-7184: (2.4014993s)
	I0512 00:57:42.179378    8048 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220512005507-7184 --format={{.State.Status}}
	I0512 00:57:43.367874    8048 cli_runner.go:217] Completed: docker container inspect kubernetes-upgrade-20220512005507-7184 --format={{.State.Status}}: (1.1884349s)
	I0512 00:57:43.367874    8048 kic.go:416] container "kubernetes-upgrade-20220512005507-7184" state is running.
	I0512 00:57:43.382870    8048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220512005507-7184
	I0512 00:57:44.533706    8048 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220512005507-7184: (1.1507764s)
	I0512 00:57:44.533706    8048 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-20220512005507-7184\config.json ...
	I0512 00:57:44.540695    8048 machine.go:88] provisioning docker machine ...
	I0512 00:57:44.540695    8048 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220512005507-7184"
	I0512 00:57:44.550701    8048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220512005507-7184
	I0512 00:57:43.430526    3732 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.331861s)
	I0512 00:57:43.430526    3732 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:71 OomKillDisable:true NGoroutines:56 SystemTime:2022-05-12 00:57:42.250125 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 00:57:43.438523    3732 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0512 00:57:45.632779    3732 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.1940337s)
	I0512 00:57:45.642178    3732 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-20220512005316-7184 --name missing-upgrade-20220512005316-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-20220512005316-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-20220512005316-7184 --network missing-upgrade-20220512005316-7184 --ip 192.168.58.2 --volume missing-upgrade-20220512005316-7184:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
	W0512 00:57:41.578238    9076 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220512005140-7184 returned with exit code 1
	I0512 00:57:41.578238    9076 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220512005140-7184: (1.2263621s)
	W0512 00:57:41.578238    9076 network_create.go:107] failed to create docker network pause-20220512005140-7184 192.168.49.0/24, will retry: subnet is taken
	I0512 00:57:41.603835    9076 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00030d138] amended:false}} dirty:map[] misses:0}
	I0512 00:57:41.603926    9076 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 00:57:41.623075    9076 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00030d138] amended:true}} dirty:map[192.168.49.0:0xc00030d138 192.168.58.0:0xc0005c48e8] misses:0}
	I0512 00:57:41.623125    9076 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 00:57:41.623125    9076 network_create.go:115] attempt to create docker network pause-20220512005140-7184 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0512 00:57:41.631925    9076 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220512005140-7184
	W0512 00:57:42.813950    9076 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220512005140-7184 returned with exit code 1
	I0512 00:57:42.813950    9076 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220512005140-7184: (1.1818387s)
	W0512 00:57:42.814116    9076 network_create.go:107] failed to create docker network pause-20220512005140-7184 192.168.58.0/24, will retry: subnet is taken
	I0512 00:57:42.842424    9076 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00030d138] amended:true}} dirty:map[192.168.49.0:0xc00030d138 192.168.58.0:0xc0005c48e8] misses:1}
	I0512 00:57:42.843361    9076 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 00:57:42.859370    9076 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00030d138] amended:true}} dirty:map[192.168.49.0:0xc00030d138 192.168.58.0:0xc0005c48e8 192.168.67.0:0xc000006a98] misses:1}
	I0512 00:57:42.859370    9076 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 00:57:42.860368    9076 network_create.go:115] attempt to create docker network pause-20220512005140-7184 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0512 00:57:42.867364    9076 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220512005140-7184
	I0512 00:57:44.156221    9076 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true pause-20220512005140-7184: (1.2887905s)
	I0512 00:57:44.156221    9076 network_create.go:99] docker network pause-20220512005140-7184 192.168.67.0/24 created
	I0512 00:57:44.156315    9076 kic.go:106] calculated static IP "192.168.67.2" for the "pause-20220512005140-7184" container
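The pause-20220512005140-7184 network above only came up on the third attempt: minikube walked candidate /24 blocks (192.168.49.0, 192.168.58.0, 192.168.67.0), treating a non-zero exit from `docker network create` as "subnet is taken" and moving on. A rough sketch of that retry loop, assuming the candidate list and error handling are simplifications of what network_create.go actually does:

package main

import (
	"fmt"
	"os/exec"
)

// createNetwork walks candidate subnets until `docker network create`
// succeeds, mirroring the "failed to create ... will retry: subnet is
// taken" loop in the log. Gateways are assumed to be the .1 address.
func createNetwork(name string) (string, error) {
	for _, third := range []int{49, 58, 67, 76, 85} { // candidate 192.168.x.0/24 blocks
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"-o", "com.docker.network.driver.mtu=1500", name)
		if err := cmd.Run(); err != nil {
			continue // subnet taken (or other failure): try the next block
		}
		return subnet, nil
	}
	return "", fmt.Errorf("no free subnet for network %s", name)
}

func main() {
	subnet, err := createNetwork("pause-20220512005140-7184")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("created on", subnet)
}

In the log the first two blocks were reserved by the concurrently-running upgrade tests, so the network landed on 192.168.67.0/24 and the node was assigned the static IP 192.168.67.2.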
	I0512 00:57:44.173125    9076 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0512 00:57:45.306659    9076 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.133475s)
	I0512 00:57:45.314649    9076 cli_runner.go:164] Run: docker volume create pause-20220512005140-7184 --label name.minikube.sigs.k8s.io=pause-20220512005140-7184 --label created_by.minikube.sigs.k8s.io=true
	I0512 00:57:45.648273    8048 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220512005507-7184: (1.0975147s)
	I0512 00:57:45.651465    8048 main.go:134] libmachine: Using SSH client type: native
	I0512 00:57:45.652465    8048 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49824 <nil> <nil>}
	I0512 00:57:45.652465    8048 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20220512005507-7184 && echo "kubernetes-upgrade-20220512005507-7184" | sudo tee /etc/hostname
	I0512 00:57:45.855006    8048 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220512005507-7184
	
	I0512 00:57:45.868133    8048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220512005507-7184
	I0512 00:57:46.963091    8048 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220512005507-7184: (1.0949015s)
	I0512 00:57:46.966146    8048 main.go:134] libmachine: Using SSH client type: native
	I0512 00:57:46.967081    8048 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49824 <nil> <nil>}
	I0512 00:57:46.967081    8048 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20220512005507-7184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220512005507-7184/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20220512005507-7184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0512 00:57:47.152091    8048 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0512 00:57:47.152091    8048 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0512 00:57:47.152091    8048 ubuntu.go:177] setting up certificates
	I0512 00:57:47.152091    8048 provision.go:83] configureAuth start
	I0512 00:57:47.158088    8048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220512005507-7184
	I0512 00:57:48.390672    8048 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220512005507-7184: (1.2325202s)
	I0512 00:57:48.390672    8048 provision.go:138] copyHostCerts
	I0512 00:57:48.390672    8048 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0512 00:57:48.390672    8048 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0512 00:57:48.391670    8048 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0512 00:57:48.392669    8048 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0512 00:57:48.392669    8048 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0512 00:57:48.393673    8048 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0512 00:57:48.394671    8048 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0512 00:57:48.394671    8048 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0512 00:57:48.394671    8048 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0512 00:57:48.395669    8048 provision.go:112] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-20220512005507-7184 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220512005507-7184]
	I0512 00:57:48.544733    8048 provision.go:172] copyRemoteCerts
	I0512 00:57:48.556739    8048 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0512 00:57:48.563744    8048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220512005507-7184
	I0512 00:57:49.761136    8048 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220512005507-7184: (1.1971468s)
	I0512 00:57:49.761136    8048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49824 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-20220512005507-7184\id_rsa Username:docker}
	I0512 00:57:49.896214    8048 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.3384037s)
	I0512 00:57:49.896214    8048 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0512 00:57:49.947982    8048 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1285 bytes)
	I0512 00:57:50.001053    8048 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0512 00:57:50.052620    8048 provision.go:86] duration metric: configureAuth took 2.900379s
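The provision.go:112 step above generates the per-node server certificate, signed by the shared minikube CA, with every address a client might dial listed as a SAN (the container IP, 127.0.0.1, localhost, minikube, and the node name). A compressed sketch of that signing step with Go's crypto/x509 — the SAN list is copied from the log, the PKCS#1 CA key format and the elided error handling are assumptions:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the shared CA (ca.pem / ca-key.pem in the minikube certs dir).
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	cb, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(cb.Bytes)
	kb, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(kb.Bytes) // assumes a PKCS#1 CA key

	// Fresh key pair for the node's docker daemon.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-20220512005507-7184"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the log's san=[...] list.
		IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "kubernetes-upgrade-20220512005507-7184"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
}

The resulting server.pem/server-key.pem are what the copyRemoteCerts step then scp's into /etc/docker on the node.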
	I0512 00:57:50.052620    8048 ubuntu.go:193] setting minikube options for container-runtime
	I0512 00:57:50.053609    8048 config.go:178] Loaded profile config "kubernetes-upgrade-20220512005507-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6-rc.0
	I0512 00:57:50.061624    8048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220512005507-7184
	I0512 00:57:47.818717    3732 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-20220512005316-7184 --name missing-upgrade-20220512005316-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-20220512005316-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-20220512005316-7184 --network missing-upgrade-20220512005316-7184 --ip 192.168.58.2 --volume missing-upgrade-20220512005316-7184:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: (2.1763905s)
	I0512 00:57:47.832755    3732 cli_runner.go:164] Run: docker container inspect missing-upgrade-20220512005316-7184 --format={{.State.Running}}
	I0512 00:57:49.032080    3732 cli_runner.go:217] Completed: docker container inspect missing-upgrade-20220512005316-7184 --format={{.State.Running}}: (1.1992161s)
	I0512 00:57:49.042333    3732 cli_runner.go:164] Run: docker container inspect missing-upgrade-20220512005316-7184 --format={{.State.Status}}
	I0512 00:57:50.220597    3732 cli_runner.go:217] Completed: docker container inspect missing-upgrade-20220512005316-7184 --format={{.State.Status}}: (1.1782036s)
	I0512 00:57:50.227606    3732 cli_runner.go:164] Run: docker exec missing-upgrade-20220512005316-7184 stat /var/lib/dpkg/alternatives/iptables
	I0512 00:57:46.385374    9076 cli_runner.go:217] Completed: docker volume create pause-20220512005140-7184 --label name.minikube.sigs.k8s.io=pause-20220512005140-7184 --label created_by.minikube.sigs.k8s.io=true: (1.0706693s)
	I0512 00:57:46.385513    9076 oci.go:103] Successfully created a docker volume pause-20220512005140-7184
	I0512 00:57:46.393656    9076 cli_runner.go:164] Run: docker run --rm --name pause-20220512005140-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=pause-20220512005140-7184 --entrypoint /usr/bin/test -v pause-20220512005140-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib
	I0512 00:57:48.936793    9076 cli_runner.go:217] Completed: docker run --rm --name pause-20220512005140-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=pause-20220512005140-7184 --entrypoint /usr/bin/test -v pause-20220512005140-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib: (2.5430061s)
	I0512 00:57:48.936793    9076 oci.go:107] Successfully prepared a docker volume pause-20220512005140-7184
	I0512 00:57:48.936793    9076 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 00:57:48.936793    9076 kic.go:179] Starting extracting preloaded images to volume ...
	I0512 00:57:48.946802    9076 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v pause-20220512005140-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir
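The two docker run commands above are the kic preload pattern: a throwaway container mounts the named volume at /var (first with /usr/bin/test to verify the mount, then with /usr/bin/tar to untar the preloaded image cache into it), so the real node container starts with /var already populated. A hedged Go sketch of the extraction step; the image name, tarball path, and volume name are taken from the log, everything else is illustrative:

package main

import (
	"os/exec"
)

// extractPreload untars a preloaded image cache into a docker volume by
// running a disposable container whose entrypoint is tar, as the
// kic.go:179 step above does.
func extractPreload(volume, tarball, kicbase string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		kicbase,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	return cmd.Run()
}

func main() {
	// Paths and names below mirror the log; adjust for a real environment.
	_ = extractPreload(
		"pause-20220512005140-7184",
		`C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4`,
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138")
}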
	I0512 00:57:51.145224    8048 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220512005507-7184: (1.0834531s)
	I0512 00:57:51.149906    8048 main.go:134] libmachine: Using SSH client type: native
	I0512 00:57:51.149906    8048 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49824 <nil> <nil>}
	I0512 00:57:51.151019    8048 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0512 00:57:51.277753    8048 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0512 00:57:51.277753    8048 ubuntu.go:71] root file system type: overlay
	I0512 00:57:51.278757    8048 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0512 00:57:51.287776    8048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220512005507-7184
	I0512 00:57:52.372688    8048 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220512005507-7184: (1.084856s)
	I0512 00:57:52.378653    8048 main.go:134] libmachine: Using SSH client type: native
	I0512 00:57:52.378711    8048 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49824 <nil> <nil>}
	I0512 00:57:52.378711    8048 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0512 00:57:52.548771    8048 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0512 00:57:52.560024    8048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220512005507-7184
	I0512 00:57:53.630819    8048 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220512005507-7184: (1.0700978s)
	I0512 00:57:53.634579    8048 main.go:134] libmachine: Using SSH client type: native
	I0512 00:57:53.635183    8048 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49824 <nil> <nil>}
	I0512 00:57:53.635183    8048 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0512 00:57:53.811176    8048 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0512 00:57:53.811176    8048 machine.go:91] provisioned docker machine in 9.2700027s
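The one-liner at 00:57:53 above is a classic install-if-changed idiom: render docker.service.new, diff it against the live unit, and only on a difference move it into place and daemon-reload/restart, so an unchanged config never bounces the daemon. A sketch of the same idea in Go, operating on local files rather than over SSH (the paths and restart commands are the ones from the log; everything else is illustrative):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged mirrors `diff -u old new || { mv new old; systemctl
// daemon-reload && systemctl restart docker; }`: the unit is swapped in
// and the daemon restarted only when the rendered file actually differs.
func installIfChanged(current, candidate string) error {
	oldB, _ := os.ReadFile(current) // a missing file reads as empty: treated as changed
	newB, err := os.ReadFile(candidate)
	if err != nil {
		return err
	}
	if bytes.Equal(oldB, newB) {
		os.Remove(candidate)
		return nil // nothing to do; docker keeps running undisturbed
	}
	if err := os.Rename(candidate, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
	}
	return nil
}

func main() {
	_ = installIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new")
}

In this run the SSH command returned empty output, meaning the rendered unit matched the existing one and docker was left running, which is why provisioning completed in about nine seconds.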
	I0512 00:57:53.811176    8048 start.go:306] post-start starting for "kubernetes-upgrade-20220512005507-7184" (driver="docker")
	I0512 00:57:53.811176    8048 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0512 00:57:53.822166    8048 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0512 00:57:53.831164    8048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220512005507-7184
	I0512 00:57:54.866469    8048 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220512005507-7184: (1.0352517s)
	I0512 00:57:54.866469    8048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49824 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-20220512005507-7184\id_rsa Username:docker}
	I0512 00:57:55.010856    8048 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.1876215s)
	I0512 00:57:55.023786    8048 ssh_runner.go:195] Run: cat /etc/os-release
	I0512 00:57:55.046482    8048 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0512 00:57:55.046482    8048 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0512 00:57:55.046482    8048 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0512 00:57:55.046482    8048 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0512 00:57:55.046482    8048 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0512 00:57:55.047698    8048 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0512 00:57:55.048745    8048 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem -> 71842.pem in /etc/ssl/certs
	I0512 00:57:55.064671    8048 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0512 00:57:55.084656    8048 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /etc/ssl/certs/71842.pem (1708 bytes)
	I0512 00:57:55.144682    8048 start.go:309] post-start completed in 1.3334372s
	I0512 00:57:55.155679    8048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 00:57:55.163682    8048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220512005507-7184
	I0512 00:57:51.461093    3732 cli_runner.go:217] Completed: docker exec missing-upgrade-20220512005316-7184 stat /var/lib/dpkg/alternatives/iptables: (1.2332992s)
	I0512 00:57:51.461093    3732 oci.go:247] the created container "missing-upgrade-20220512005316-7184" has a running status.
	I0512 00:57:51.461178    3732 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\missing-upgrade-20220512005316-7184\id_rsa...
	I0512 00:57:51.794026    3732 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\missing-upgrade-20220512005316-7184\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0512 00:57:53.009118    3732 cli_runner.go:164] Run: docker container inspect missing-upgrade-20220512005316-7184 --format={{.State.Status}}
	I0512 00:57:54.072748    3732 cli_runner.go:217] Completed: docker container inspect missing-upgrade-20220512005316-7184 --format={{.State.Status}}: (1.0633616s)
	I0512 00:57:54.092462    3732 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0512 00:57:54.092462    3732 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-20220512005316-7184 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0512 00:57:55.353220    3732 kic_runner.go:123] Done: [docker exec --privileged missing-upgrade-20220512005316-7184 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.2606936s)
	I0512 00:57:55.356233    3732 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\missing-upgrade-20220512005316-7184\id_rsa...
	I0512 00:57:55.861935    3732 cli_runner.go:164] Run: docker container inspect missing-upgrade-20220512005316-7184 --format={{.State.Status}}
	I0512 00:57:56.305878    8048 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220512005507-7184: (1.1421367s)
	I0512 00:57:56.305878    8048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49824 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-20220512005507-7184\id_rsa Username:docker}
	I0512 00:57:56.428843    8048 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.2730983s)
	I0512 00:57:56.443205    8048 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0512 00:57:56.459135    8048 fix.go:57] fixHost completed within 17.8310835s
	I0512 00:57:56.460313    8048 start.go:81] releasing machines lock for "kubernetes-upgrade-20220512005507-7184", held for 17.8322613s
	I0512 00:57:56.467545    8048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220512005507-7184
	I0512 00:57:57.532516    8048 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220512005507-7184: (1.0647978s)
	I0512 00:57:57.534781    8048 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0512 00:57:57.543339    8048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220512005507-7184
	I0512 00:57:57.544316    8048 ssh_runner.go:195] Run: systemctl --version
	I0512 00:57:57.551319    8048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220512005507-7184
	I0512 00:57:58.602039    8048 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220512005507-7184: (1.050666s)
	I0512 00:57:58.602039    8048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49824 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-20220512005507-7184\id_rsa Username:docker}
	I0512 00:57:58.645044    8048 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220512005507-7184: (1.1016477s)
	I0512 00:57:58.645044    8048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49824 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-20220512005507-7184\id_rsa Username:docker}
	I0512 00:57:58.668036    8048 ssh_runner.go:235] Completed: systemctl --version: (1.1236625s)
	I0512 00:57:58.679045    8048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0512 00:57:58.718049    8048 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 00:57:58.807878    8048 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.2729319s)
	I0512 00:57:58.808194    8048 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0512 00:57:58.822583    8048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0512 00:57:58.847123    8048 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0512 00:57:58.895295    8048 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0512 00:57:59.018549    8048 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0512 00:57:59.210838    8048 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 00:57:59.248664    8048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0512 00:57:59.427678    8048 ssh_runner.go:195] Run: sudo systemctl start docker
	I0512 00:57:59.461674    8048 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 00:57:59.549582    8048 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 00:57:59.631535    8048 out.go:204] * Preparing Kubernetes v1.23.6-rc.0 on Docker 20.10.15 ...
	I0512 00:57:59.640539    8048 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-20220512005507-7184 dig +short host.docker.internal
	I0512 00:57:56.933837    3732 cli_runner.go:217] Completed: docker container inspect missing-upgrade-20220512005316-7184 --format={{.State.Status}}: (1.0718467s)
	I0512 00:57:56.933837    3732 machine.go:88] provisioning docker machine ...
	I0512 00:57:56.933837    3732 ubuntu.go:169] provisioning hostname "missing-upgrade-20220512005316-7184"
	I0512 00:57:56.940881    3732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20220512005316-7184
	I0512 00:57:57.985063    3732 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20220512005316-7184: (1.0441281s)
	I0512 00:57:57.988054    3732 main.go:134] libmachine: Using SSH client type: native
	I0512 00:57:57.994545    3732 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49837 <nil> <nil>}
	I0512 00:57:57.994545    3732 main.go:134] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-20220512005316-7184 && echo "missing-upgrade-20220512005316-7184" | sudo tee /etc/hostname
	I0512 00:57:58.149893    3732 main.go:134] libmachine: SSH cmd err, output: <nil>: missing-upgrade-20220512005316-7184
	
	I0512 00:57:58.156965    3732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20220512005316-7184
	I0512 00:57:59.251683    3732 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20220512005316-7184: (1.094662s)
	I0512 00:57:59.255669    3732 main.go:134] libmachine: Using SSH client type: native
	I0512 00:57:59.255669    3732 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49837 <nil> <nil>}
	I0512 00:57:59.256674    3732 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-20220512005316-7184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-20220512005316-7184/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-20220512005316-7184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0512 00:57:59.372675    3732 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0512 00:57:59.372675    3732 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0512 00:57:59.372675    3732 ubuntu.go:177] setting up certificates
	I0512 00:57:59.372675    3732 provision.go:83] configureAuth start
	I0512 00:57:59.380677    3732 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-20220512005316-7184
	I0512 00:58:00.456056    3732 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-20220512005316-7184: (1.0752242s)
	I0512 00:58:00.456111    3732 provision.go:138] copyHostCerts
	I0512 00:58:00.456111    3732 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0512 00:58:00.456111    3732 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0512 00:58:00.456834    3732 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0512 00:58:00.458104    3732 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0512 00:58:00.458175    3732 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0512 00:58:00.458484    3732 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0512 00:58:00.459438    3732 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0512 00:58:00.459484    3732 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0512 00:58:00.459838    3732 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0512 00:58:00.460772    3732 provision.go:112] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.missing-upgrade-20220512005316-7184 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-20220512005316-7184]
	I0512 00:58:00.873991    3732 provision.go:172] copyRemoteCerts
	I0512 00:58:00.888730    3732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0512 00:58:00.902807    3732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-20220512005316-7184
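The inspect call above resolves which host port Docker bound to the container's 22/tcp so the certs from copyRemoteCerts can be pushed in over SSH; the extra quoting around the template in the log is for the Windows shell. A sketch of the same lookup with os/exec, which needs no shell quoting (assumes a local docker CLI on PATH; this is not minikube's cli_runner):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort shells out to `docker container inspect -f` with the same
	// Go template shown in the log to read the host port bound to 22/tcp.
	func sshHostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("missing-upgrade-20220512005316-7184")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh is published on 127.0.0.1:" + port)
	}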
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-05-12 00:52:00 UTC, end at Thu 2022-05-12 00:58:10 UTC. --
	May 12 00:56:53 running-upgrade-20220512005137-7184 dockerd[6473]: time="2022-05-12T00:56:53.232048000Z" level=warning msg="d9dbf265645ce275777bb54367da1a01755bf0da47469305a422d9a791210551 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/d9dbf265645ce275777bb54367da1a01755bf0da47469305a422d9a791210551/mounts/shm, flags: 0x2: no such file or directory"
	May 12 00:57:05 running-upgrade-20220512005137-7184 dockerd[6473]: time="2022-05-12T00:57:05.606189700Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 00:57:05 running-upgrade-20220512005137-7184 dockerd[6473]: time="2022-05-12T00:57:05.606404700Z" level=warning msg="23a4f4d2f9095dae60cdffa5d72c5292234c90c93fb192ce83bc88cc422ae4b2 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/23a4f4d2f9095dae60cdffa5d72c5292234c90c93fb192ce83bc88cc422ae4b2/mounts/shm, flags: 0x2: no such file or directory"
	May 12 00:57:06 running-upgrade-20220512005137-7184 dockerd[6473]: time="2022-05-12T00:57:06.920157100Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 00:57:06 running-upgrade-20220512005137-7184 dockerd[6473]: time="2022-05-12T00:57:06.920389600Z" level=warning msg="19e20872fb0645ed001e65636e05c7dc4a51464f5a959eb495c5d029bade02f1 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/19e20872fb0645ed001e65636e05c7dc4a51464f5a959eb495c5d029bade02f1/mounts/shm, flags: 0x2: no such file or directory"
	May 12 00:57:10 running-upgrade-20220512005137-7184 dockerd[6473]: time="2022-05-12T00:57:10.006861400Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 00:57:10 running-upgrade-20220512005137-7184 dockerd[6473]: time="2022-05-12T00:57:10.007014300Z" level=warning msg="0eeb935928f59220aa4f460bf977207213b9ed7b69cfb6e216d1cf32db487127 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/0eeb935928f59220aa4f460bf977207213b9ed7b69cfb6e216d1cf32db487127/mounts/shm, flags: 0x2: no such file or directory"
	May 12 00:57:16 running-upgrade-20220512005137-7184 dockerd[6473]: time="2022-05-12T00:57:16.816930100Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 00:57:16 running-upgrade-20220512005137-7184 dockerd[6473]: time="2022-05-12T00:57:16.817122900Z" level=warning msg="576ff80e558890f31a329a7d53c5fe81a07357b2f96ee90635ff3471c0bd8316 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/576ff80e558890f31a329a7d53c5fe81a07357b2f96ee90635ff3471c0bd8316/mounts/shm, flags: 0x2: no such file or directory"
	May 12 00:57:22 running-upgrade-20220512005137-7184 dockerd[6473]: time="2022-05-12T00:57:22.258539100Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 00:57:22 running-upgrade-20220512005137-7184 dockerd[6473]: time="2022-05-12T00:57:22.258862200Z" level=warning msg="93e40218b0badfbf855ffd5aaaea334c01ac5c31db47c8b6c3330e3ad8a45f20 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/93e40218b0badfbf855ffd5aaaea334c01ac5c31db47c8b6c3330e3ad8a45f20/mounts/shm, flags: 0x2: no such file or directory"
	May 12 00:57:27 running-upgrade-20220512005137-7184 dockerd[6473]: time="2022-05-12T00:57:27.253200600Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 00:57:27 running-upgrade-20220512005137-7184 dockerd[6473]: time="2022-05-12T00:57:27.253419100Z" level=warning msg="624ae4bd0bb303d84d1576b82b5c9412004cfa486fdf1772efe665a224d9514a cleanup: failed to unmount IPC: umount /var/lib/docker/containers/624ae4bd0bb303d84d1576b82b5c9412004cfa486fdf1772efe665a224d9514a/mounts/shm, flags: 0x2: no such file or directory"
	May 12 00:57:28 running-upgrade-20220512005137-7184 dockerd[6473]: time="2022-05-12T00:57:28.050505600Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 00:57:28 running-upgrade-20220512005137-7184 dockerd[6473]: time="2022-05-12T00:57:28.050723700Z" level=warning msg="2e507489883987c728ed506cc8299239513cbd17a3d0e727f184d3efe3751315 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/2e507489883987c728ed506cc8299239513cbd17a3d0e727f184d3efe3751315/mounts/shm, flags: 0x2: no such file or directory"
	May 12 00:57:28 running-upgrade-20220512005137-7184 dockerd[6473]: time="2022-05-12T00:57:28.426835600Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 00:57:28 running-upgrade-20220512005137-7184 dockerd[6473]: time="2022-05-12T00:57:28.838252600Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 00:57:29 running-upgrade-20220512005137-7184 dockerd[6473]: time="2022-05-12T00:57:29.227703200Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 00:57:29 running-upgrade-20220512005137-7184 dockerd[6473]: time="2022-05-12T00:57:29.561523000Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 00:57:29 running-upgrade-20220512005137-7184 dockerd[6473]: time="2022-05-12T00:57:29.922953600Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 00:57:30 running-upgrade-20220512005137-7184 dockerd[6473]: time="2022-05-12T00:57:30.321774600Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 00:57:30 running-upgrade-20220512005137-7184 dockerd[6473]: time="2022-05-12T00:57:30.685725500Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 00:57:31 running-upgrade-20220512005137-7184 dockerd[6473]: time="2022-05-12T00:57:31.025283600Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 00:57:31 running-upgrade-20220512005137-7184 dockerd[6473]: time="2022-05-12T00:57:31.362943100Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 00:57:31 running-upgrade-20220512005137-7184 dockerd[6473]: time="2022-05-12T00:57:31.727592700Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* time="2022-05-12T00:58:13Z" level=fatal msg="failed to connect: failed to connect, make sure you are running as root and the runtime has been started: context deadline exceeded"
	CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [May12 00:38] WSL2: Performing memory compaction.
	[May12 00:39] WSL2: Performing memory compaction.
	[May12 00:41] WSL2: Performing memory compaction.
	[May12 00:42] WSL2: Performing memory compaction.
	[May12 00:43] WSL2: Performing memory compaction.
	[May12 00:44] WSL2: Performing memory compaction.
	[May12 00:45] WSL2: Performing memory compaction.
	[May12 00:46] WSL2: Performing memory compaction.
	[May12 00:47] WSL2: Performing memory compaction.
	[May12 00:48] WSL2: Performing memory compaction.
	[May12 00:49] process 'docker/tmp/qemu-check071081722/check' started with executable stack
	[ +21.082981] WSL2: Performing memory compaction.
	[May12 00:51] WSL2: Performing memory compaction.
	[May12 00:52] WSL2: Performing memory compaction.
	[May12 00:54] WSL2: Performing memory compaction.
	[May12 00:56] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.010948] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.036593] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000001] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000001] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[May12 00:57] WSL2: Performing memory compaction.
	[May12 00:58] WSL2: Performing memory compaction.
	
	* 
	* ==> kernel <==
	*  00:59:14 up  2:07,  0 users,  load average: 5.01, 4.63, 3.17
	Linux running-upgrade-20220512005137-7184 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 19.10"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-05-12 00:52:00 UTC, end at Thu 2022-05-12 00:59:14 UTC. --
	May 12 00:57:25 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:25.216906    8709 kubelet.go:2267] node "running-upgrade-20220512005137-7184" not found
	May 12 00:57:25 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:25.317717    8709 kubelet.go:2267] node "running-upgrade-20220512005137-7184" not found
	May 12 00:57:25 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:25.418434    8709 kubelet.go:2267] node "running-upgrade-20220512005137-7184" not found
	May 12 00:57:25 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:25.519395    8709 kubelet.go:2267] node "running-upgrade-20220512005137-7184" not found
	May 12 00:57:25 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:25.620557    8709 kubelet.go:2267] node "running-upgrade-20220512005137-7184" not found
	May 12 00:57:25 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:25.721475    8709 kubelet.go:2267] node "running-upgrade-20220512005137-7184" not found
	May 12 00:57:25 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:25.822609    8709 kubelet.go:2267] node "running-upgrade-20220512005137-7184" not found
	May 12 00:57:25 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:25.923378    8709 kubelet.go:2267] node "running-upgrade-20220512005137-7184" not found
	May 12 00:57:25 running-upgrade-20220512005137-7184 kubelet[8709]: I0512 00:57:25.967893    8709 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
	May 12 00:57:26 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:26.024510    8709 kubelet.go:2267] node "running-upgrade-20220512005137-7184" not found
	May 12 00:57:26 running-upgrade-20220512005137-7184 kubelet[8709]: I0512 00:57:26.037196    8709 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 576ff80e558890f31a329a7d53c5fe81a07357b2f96ee90635ff3471c0bd8316
	May 12 00:57:26 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:26.037949    8709 pod_workers.go:191] Error syncing pod 112c60df9e36eeaf13a6dd3074765810 ("kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)"
	May 12 00:57:26 running-upgrade-20220512005137-7184 kubelet[8709]: W0512 00:57:26.043276    8709 status_manager.go:556] Failed to get status for pod "kube-apiserver-running-upgrade-20220512005137-7184_kube-system(112c60df9e36eeaf13a6dd3074765810)": Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-running-upgrade-20220512005137-7184: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	May 12 00:57:26 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:26.118043    8709 event.go:269] Unable to write event: 'Post https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal' (may retry after sleeping)
	May 12 00:57:26 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:26.125485    8709 kubelet.go:2267] node "running-upgrade-20220512005137-7184" not found
	May 12 00:57:26 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:26.227154    8709 kubelet.go:2267] node "running-upgrade-20220512005137-7184" not found
	May 12 00:57:26 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:26.328291    8709 kubelet.go:2267] node "running-upgrade-20220512005137-7184" not found
	May 12 00:57:26 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:26.429686    8709 kubelet.go:2267] node "running-upgrade-20220512005137-7184" not found
	May 12 00:57:26 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:26.530279    8709 kubelet.go:2267] node "running-upgrade-20220512005137-7184" not found
	May 12 00:57:26 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:26.630894    8709 kubelet.go:2267] node "running-upgrade-20220512005137-7184" not found
	May 12 00:57:26 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:26.661044    8709 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.RuntimeClass: Get https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: x509: certificate is valid for minikubeCA, kubernetes.default.svc.cluster.local, kubernetes.default.svc, kubernetes.default, kubernetes, localhost, not control-plane.minikube.internal
	May 12 00:57:26 running-upgrade-20220512005137-7184 kubelet[8709]: E0512 00:57:26.732309    8709 kubelet.go:2267] node "running-upgrade-20220512005137-7184" not found
	May 12 00:57:26 running-upgrade-20220512005137-7184 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	May 12 00:57:26 running-upgrade-20220512005137-7184 systemd[1]: kubelet.service: Succeeded.
	May 12 00:57:26 running-upgrade-20220512005137-7184 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0512 00:59:14.886304    9696 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: "\n** stderr ** \nError from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p running-upgrade-20220512005137-7184 -n running-upgrade-20220512005137-7184

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p running-upgrade-20220512005137-7184 -n running-upgrade-20220512005137-7184: exit status 2 (22.4457986s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0512 00:59:37.785623    4476 status.go:422] Error apiserver status: https://127.0.0.1:49480/healthz returned error 500:
	[+]ping ok
	[-]log failed: reason withheld
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-20220512005137-7184" apiserver is not running, skipping kubectl commands (state="Error")
helpers_test.go:175: Cleaning up "running-upgrade-20220512005137-7184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-20220512005137-7184

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-20220512005137-7184: (22.0280367s)
--- FAIL: TestRunningBinaryUpgrade (502.10s)
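The apiserver status error above comes from a plain HTTPS GET against /healthz; the 500 body enumerates the individual checks, and here [-]log and [-]etcd are the ones failing. A sketch of that probe under stated assumptions (skipping TLS verification is an assumption to cope with the cluster's self-signed certificate; the endpoint is the one from the log):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz fetches /healthz the way the status output above suggests:
	// any non-200 response is reported together with the check-by-check body.
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption for this sketch: the apiserver cert is self-signed,
			// so a bare status probe skips verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d:\n%s", resp.StatusCode, body)
		}
		return nil
	}

	func main() {
		if err := checkHealthz("https://127.0.0.1:49480/healthz"); err != nil {
			fmt.Println("apiserver status:", err)
		}
	}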

                                                
                                    
TestNoKubernetes/serial/Start (41.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220512004748-7184 --no-kubernetes --driver=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220512004748-7184 --no-kubernetes --driver=docker: exit status 1 (33.4822536s)

                                                
                                                
-- stdout --
	* [NoKubernetes-20220512004748-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13639
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting minikube without Kubernetes NoKubernetes-20220512004748-7184 in cluster NoKubernetes-20220512004748-7184
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=16300MB) ...

                                                
                                                
-- /stdout --
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-20220512004748-7184 --no-kubernetes --driver=docker" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-20220512004748-7184
helpers_test.go:231: (dbg) Done: docker inspect NoKubernetes-20220512004748-7184: (1.2167695s)
helpers_test.go:235: (dbg) docker inspect NoKubernetes-20220512004748-7184:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5a64b175d7176fbacc2d7582c097df7a6729dd8731a9540f58ca2e8138cd81a6",
	        "Created": "2022-05-12T00:52:44.4896852Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 144435,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-12T00:52:45.6632847Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8a42e1145657f551cd435eddb43b96ab44d0facbe44106da934225366eeb7757",
	        "ResolvConfPath": "/var/lib/docker/containers/5a64b175d7176fbacc2d7582c097df7a6729dd8731a9540f58ca2e8138cd81a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5a64b175d7176fbacc2d7582c097df7a6729dd8731a9540f58ca2e8138cd81a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/5a64b175d7176fbacc2d7582c097df7a6729dd8731a9540f58ca2e8138cd81a6/hosts",
	        "LogPath": "/var/lib/docker/containers/5a64b175d7176fbacc2d7582c097df7a6729dd8731a9540f58ca2e8138cd81a6/5a64b175d7176fbacc2d7582c097df7a6729dd8731a9540f58ca2e8138cd81a6-json.log",
	        "Name": "/NoKubernetes-20220512004748-7184",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "NoKubernetes-20220512004748-7184:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "NoKubernetes-20220512004748-7184",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 17091788800,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 17091788800,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/44115ef8558b8eb5778265120e779360b480c99f3414e2f8b34cfb0c29beac66-init/diff:/var/lib/docker/overlay2/f5e9ce82d74c36e0b6de7ac5d28dc4eacb2abae094b3550db66974795ad74446/diff:/var/lib/docker/overlay2/63fa2edc88f635760dd847938696c8fc50aad8a0c51ab6c6f93b0aa9a6fcefe6/diff:/var/lib/docker/overlay2/5fcaace21e215fd120a775470b4a78ef06d9f466e024f0b1baef234ddf87f04f/diff:/var/lib/docker/overlay2/6c9accb62919ca088333f8b3a193f43886e2c3042c5ec726e1c7fd19ee183024/diff:/var/lib/docker/overlay2/a9a1aea640018bd9e524c36422ea97cc97a391601b718810fed779260e48c87a/diff:/var/lib/docker/overlay2/2476b6e8d656e43d76d4288098236bc765cb37fa9dde696f09bfce485e99229e/diff:/var/lib/docker/overlay2/6cdf08ddc61561e961de67f3e14478397868c53a4d8552909a5b84e35b28db1f/diff:/var/lib/docker/overlay2/6f6f8b6686cc7838a52ce30fdc4e938cde2fb68b161e09a9bc81fa11011619a6/diff:/var/lib/docker/overlay2/9f55b91f762ea60dc3da71cf56e5ca24181e712389d99cb8d972bba474f5d6a4/diff:/var/lib/docker/overlay2/4c269b
df57eae0d21d2034dc086308278f9b00f2301c726b6de876b9ff97298d/diff:/var/lib/docker/overlay2/5fb8ed9b9e765df8150f27b7847ec7333b2c7d978dbc1161da97d630ec7e43e2/diff:/var/lib/docker/overlay2/3a297e9f6ab51d930ef61c49a0fea772cdc2a2704a077db6adb142eb044d9a93/diff:/var/lib/docker/overlay2/2068464f4655627fb513b31660ab34c938e559da10d44fd723ce9e1d744a037d/diff:/var/lib/docker/overlay2/f783596106daebadefbb7774015c369d757665d434c96581b426b2e5f5b453c4/diff:/var/lib/docker/overlay2/ac8927d3cc7829cc82e4c0214dd4adee97aedbe2b7d992cbbc08288443c8e320/diff:/var/lib/docker/overlay2/91eff4612dd56b2c82f4357b879f9cdcbc13704bf8f6dcbfc56febb104774843/diff:/var/lib/docker/overlay2/ee7366acc162efb7b878c4c56df021a8ef756fa595230ffe898cd0dd0355eb44/diff:/var/lib/docker/overlay2/ab5df115d2ec8cd71172942a2e449de32b93a3b6b5d90122c0e734c5e11d6bb2/diff:/var/lib/docker/overlay2/59fcbe9b552129cdc5c96e9e8ff27f4b88a12645aec3cf8f48f28d91521760e0/diff:/var/lib/docker/overlay2/0684681eb5880654b43e8803ef8f17b85e6129c85ff81c13b509563184a77625/diff:/var/lib/d
ocker/overlay2/8bbd24801b480df6ca8545e8f8bb09c17b7598c2868fb94ea5b8775ce2f311b4/diff:/var/lib/docker/overlay2/f28553cc59fccccfdfc5c24b7b8dfe4055c625d0a004731911c34b4ba32a9dfb/diff:/var/lib/docker/overlay2/2e47a8ef6e4481885d71f57a1d9ef99898b741644addd2796de5c2f4c696edb0/diff:/var/lib/docker/overlay2/e1f1eaaa809c974dabd197f590d19be05325f506a53a9a1f8ba29defd7096f60/diff:/var/lib/docker/overlay2/83ce12af60df76f98283ed8f3450cd5727b42d06055b18f04a18068b105ae128/diff:/var/lib/docker/overlay2/5fd34820f54e7f8f0c898c21b5d9d030e5b82c65c901897306c3db475481167f/diff:/var/lib/docker/overlay2/3ecb6f46fa47a8906ff5de1da5a63be9c664ff5bc66faf870126868d36bb77c6/diff:/var/lib/docker/overlay2/ccb92f12dd3e84b11b2c9b1ef6a0581ad5894648432ebe7cb5d16d48c7aacf6e/diff:/var/lib/docker/overlay2/7c6d11dc9abdd4916f3759c8ae4db8c3011cff872f2fd3cc502e7f663e496765/diff:/var/lib/docker/overlay2/b865b0351704115fa113e25f7651d1dc1e2f0348c332552e555e898094f34802/diff:/var/lib/docker/overlay2/bbcf207462c3f88368214d8e4ca222f28a828bd30661741d421665b4d10
80f07/diff:/var/lib/docker/overlay2/b554a32e9a2e4d3773e918754c27a1b32bc7ec5327d3bd1f52d7a146a07fa2c5/diff:/var/lib/docker/overlay2/d0a997bacfa9b1b54f61c62f00ad2797616ea9bb55182aad68ed805f96f5f72b/diff:/var/lib/docker/overlay2/e0c168ecfe6a93618f4f653c1aba422023114f242ab1045591d0c8454573d5c2/diff:/var/lib/docker/overlay2/fb67af38a46ef55935fcfb4f1be5f34b45b3d0e1c571538828117f23eedea417/diff:/var/lib/docker/overlay2/e96ed0776e5f27ef225469ac5f5e8ed2e299c72d5db88782599c0fdd1cec2fe3/diff:/var/lib/docker/overlay2/91b77e60e0a7864ace4f5a4d65f465bd7fe862616a87a74ee9fee21dc5dceb07/diff:/var/lib/docker/overlay2/9829211293f70b356dfa8d07b5dbbc3a6d05415cbd2840fd9dd948b8b315bf18/diff:/var/lib/docker/overlay2/dc35dda36e34a2f4f3a5d958b1a7d4d75db8655c4bc7b4b3d9591f43f9a645fc/diff:/var/lib/docker/overlay2/968c2bb04f641a9c8bd30d38659dc28973b31bfd577bb1aa355ae6c2ab4a0d34/diff:/var/lib/docker/overlay2/37432c6ae0b10a52e95b215fdd2e256362060f32c4a52f0d2021b6e10b3ed77b/diff:/var/lib/docker/overlay2/77687f9734b19f3e8a5bb08b07067e0b572775
20867b7a6ad80b67ffebe332d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/44115ef8558b8eb5778265120e779360b480c99f3414e2f8b34cfb0c29beac66/merged",
	                "UpperDir": "/var/lib/docker/overlay2/44115ef8558b8eb5778265120e779360b480c99f3414e2f8b34cfb0c29beac66/diff",
	                "WorkDir": "/var/lib/docker/overlay2/44115ef8558b8eb5778265120e779360b480c99f3414e2f8b34cfb0c29beac66/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "NoKubernetes-20220512004748-7184",
	                "Source": "/var/lib/docker/volumes/NoKubernetes-20220512004748-7184/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "NoKubernetes-20220512004748-7184",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "NoKubernetes-20220512004748-7184",
	                "name.minikube.sigs.k8s.io": "NoKubernetes-20220512004748-7184",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "55ba57607af68a8ae049ff57894c8067bb79dc401c1818fab923fa0cdbc8f258",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49510"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49511"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49512"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49513"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49514"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/55ba57607af6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "NoKubernetes-20220512004748-7184": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5a64b175d717",
	                        "NoKubernetes-20220512004748-7184"
	                    ],
	                    "NetworkID": "020bb783e5c975cf489b8249af266c3e1235b8df77defc00eb735dc65738f8a2",
	                    "EndpointID": "9cb0c8f0ae1ab9278769bdcd418fbd70e13caddbae5c2fde5c60a0dcb4b8b4eb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220512004748-7184 -n NoKubernetes-20220512004748-7184
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-20220512004748-7184 -n NoKubernetes-20220512004748-7184: exit status 3 (6.8882418s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0512 00:52:56.626014    7776 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: Error creating new native config from ssh using: docker, &{[] [C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\NoKubernetes-20220512004748-7184\id_rsa]}: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\NoKubernetes-20220512004748-7184\id_rsa: The system cannot find the file specified.
	E0512 00:52:56.626043    7776 status.go:247] status error: NewSession: new client: new client: Error creating new native config from ssh using: docker, &{[] [C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\NoKubernetes-20220512004748-7184\id_rsa]}: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\NoKubernetes-20220512004748-7184\id_rsa: The system cannot find the file specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "NoKubernetes-20220512004748-7184" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestNoKubernetes/serial/Start (41.60s)
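The status errors above reduce to one missing file: because `minikube start` exited during container creation, the per-machine SSH identity was never written, so every later SSH dial fails with "The system cannot find the file specified". An illustrative pre-check that would surface this earlier (hypothetical helper, not minikube's code; the path shape mirrors the error above):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// machineKeyPath mirrors the layout in the error above:
	// <MINIKUBE_HOME>\machines\<profile>\id_rsa.
	func machineKeyPath(home, profile string) string {
		return filepath.Join(home, "machines", profile, "id_rsa")
	}

	// haveSSHKey reports whether the identity exists before any SSH dial is
	// attempted, turning an opaque open() failure into an early, clear error.
	func haveSSHKey(home, profile string) error {
		p := machineKeyPath(home, profile)
		if _, err := os.Stat(p); err != nil {
			return fmt.Errorf("machine %q has no SSH identity at %s: %w", profile, p, err)
		}
		return nil
	}

	func main() {
		err := haveSSHKey(`C:\Users\jenkins.minikube4\minikube-integration\.minikube`,
			"NoKubernetes-20220512004748-7184")
		fmt.Println(err)
	}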

                                                
                                    
TestPause/serial/Pause (57.59s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-20220512005140-7184 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p pause-20220512005140-7184 --alsologtostderr -v=5: exit status 80 (7.5175356s)

                                                
                                                
-- stdout --
	* Pausing node pause-20220512005140-7184 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0512 01:01:02.645717    6108 out.go:296] Setting OutFile to fd 1620 ...
	I0512 01:01:02.726523    6108 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:01:02.726523    6108 out.go:309] Setting ErrFile to fd 1652...
	I0512 01:01:02.726523    6108 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:01:02.744813    6108 out.go:303] Setting JSON to false
	I0512 01:01:02.744813    6108 mustload.go:65] Loading cluster: pause-20220512005140-7184
	I0512 01:01:02.745416    6108 config.go:178] Loaded profile config "pause-20220512005140-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:01:02.761617    6108 cli_runner.go:164] Run: docker container inspect pause-20220512005140-7184 --format={{.State.Status}}
	I0512 01:01:05.585541    6108 cli_runner.go:217] Completed: docker container inspect pause-20220512005140-7184 --format={{.State.Status}}: (2.8237784s)
	I0512 01:01:05.585541    6108 host.go:66] Checking if "pause-20220512005140-7184" exists ...
	I0512 01:01:05.595889    6108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:01:06.723842    6108 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.1278951s)
	I0512 01:01:06.724836    6108 pause.go:58] "namespaces" ="keys" ="(MISSING)"
	I0512 01:01:06.728900    6108 out.go:177] * Pausing node pause-20220512005140-7184 ... 
	I0512 01:01:06.731844    6108 host.go:66] Checking if "pause-20220512005140-7184" exists ...
	I0512 01:01:06.742840    6108 ssh_runner.go:195] Run: systemctl --version
	I0512 01:01:06.748838    6108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:01:07.830415    6108 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.0815216s)
	I0512 01:01:07.830947    6108 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49879 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\pause-20220512005140-7184\id_rsa Username:docker}
	I0512 01:01:07.915650    6108 ssh_runner.go:235] Completed: systemctl --version: (1.172749s)
	I0512 01:01:07.926550    6108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 01:01:07.964579    6108 pause.go:50] kubelet running: true
	I0512 01:01:07.981360    6108 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0512 01:01:08.228406    6108 retry.go:31] will retry after 276.165072ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0512 01:01:08.515765    6108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 01:01:08.543764    6108 pause.go:50] kubelet running: true
	I0512 01:01:08.556763    6108 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0512 01:01:08.783375    6108 retry.go:31] will retry after 540.190908ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0512 01:01:09.343228    6108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 01:01:09.375627    6108 pause.go:50] kubelet running: true
	I0512 01:01:09.392889    6108 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0512 01:01:09.754577    6108 out.go:177] 
	W0512 01:01:09.763576    6108 out.go:239] X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	W0512 01:01:09.763576    6108 out.go:239] * 
	* 
	W0512 01:01:09.792576    6108 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_pause_0a4d03c8adbe4992011689b475409882710ca950_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_pause_0a4d03c8adbe4992011689b475409882710ca950_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0512 01:01:09.795581    6108 out.go:177] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-windows-amd64.exe pause -p pause-20220512005140-7184 --alsologtostderr -v=5" : exit status 80
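The pause flow above retries `sudo systemctl disable --now kubelet` with growing waits (276ms, then 540ms) before giving up with GUEST_PAUSE; the underlying failure is update-rc.d rejecting a unit whose Default-Start contains no runlevels. A generic sketch of that retry shape (attempt count and backoff values are assumptions for illustration; this is not minikube's retry.go):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// retryDisableKubelet reruns the command the log shows failing, sleeping
	// a doubling backoff between attempts, and returns the last error.
	func retryDisableKubelet(attempts int) error {
		backoff := 250 * time.Millisecond
		var err error
		for i := 0; i < attempts; i++ {
			err = exec.Command("sudo", "systemctl", "disable", "--now", "kubelet").Run()
			if err == nil {
				return nil
			}
			fmt.Printf("will retry after %v: %v\n", backoff, err)
			time.Sleep(backoff)
			backoff *= 2
		}
		return fmt.Errorf("kubelet disable --now: %w", err)
	}

	func main() {
		if err := retryDisableKubelet(3); err != nil {
			fmt.Println("Exiting due to GUEST_PAUSE:", err)
		}
	}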
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20220512005140-7184
helpers_test.go:231: (dbg) Done: docker inspect pause-20220512005140-7184: (1.1587443s)
helpers_test.go:235: (dbg) docker inspect pause-20220512005140-7184:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "18e2eed271b9010aeba677455559d5fa350c421d241ea6643e75bf1b1295b98a",
	        "Created": "2022-05-12T00:58:24.8559004Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 173863,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-12T00:58:26.7948067Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8a42e1145657f551cd435eddb43b96ab44d0facbe44106da934225366eeb7757",
	        "ResolvConfPath": "/var/lib/docker/containers/18e2eed271b9010aeba677455559d5fa350c421d241ea6643e75bf1b1295b98a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/18e2eed271b9010aeba677455559d5fa350c421d241ea6643e75bf1b1295b98a/hostname",
	        "HostsPath": "/var/lib/docker/containers/18e2eed271b9010aeba677455559d5fa350c421d241ea6643e75bf1b1295b98a/hosts",
	        "LogPath": "/var/lib/docker/containers/18e2eed271b9010aeba677455559d5fa350c421d241ea6643e75bf1b1295b98a/18e2eed271b9010aeba677455559d5fa350c421d241ea6643e75bf1b1295b98a-json.log",
	        "Name": "/pause-20220512005140-7184",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20220512005140-7184:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20220512005140-7184",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e8d5531512b7a911f6cfe7100e18d8756ac20aba06f91fca74ca5931c044b75b-init/diff:/var/lib/docker/overlay2/f5e9ce82d74c36e0b6de7ac5d28dc4eacb2abae094b3550db66974795ad74446/diff:/var/lib/docker/overlay2/63fa2edc88f635760dd847938696c8fc50aad8a0c51ab6c6f93b0aa9a6fcefe6/diff:/var/lib/docker/overlay2/5fcaace21e215fd120a775470b4a78ef06d9f466e024f0b1baef234ddf87f04f/diff:/var/lib/docker/overlay2/6c9accb62919ca088333f8b3a193f43886e2c3042c5ec726e1c7fd19ee183024/diff:/var/lib/docker/overlay2/a9a1aea640018bd9e524c36422ea97cc97a391601b718810fed779260e48c87a/diff:/var/lib/docker/overlay2/2476b6e8d656e43d76d4288098236bc765cb37fa9dde696f09bfce485e99229e/diff:/var/lib/docker/overlay2/6cdf08ddc61561e961de67f3e14478397868c53a4d8552909a5b84e35b28db1f/diff:/var/lib/docker/overlay2/6f6f8b6686cc7838a52ce30fdc4e938cde2fb68b161e09a9bc81fa11011619a6/diff:/var/lib/docker/overlay2/9f55b91f762ea60dc3da71cf56e5ca24181e712389d99cb8d972bba474f5d6a4/diff:/var/lib/docker/overlay2/4c269b
df57eae0d21d2034dc086308278f9b00f2301c726b6de876b9ff97298d/diff:/var/lib/docker/overlay2/5fb8ed9b9e765df8150f27b7847ec7333b2c7d978dbc1161da97d630ec7e43e2/diff:/var/lib/docker/overlay2/3a297e9f6ab51d930ef61c49a0fea772cdc2a2704a077db6adb142eb044d9a93/diff:/var/lib/docker/overlay2/2068464f4655627fb513b31660ab34c938e559da10d44fd723ce9e1d744a037d/diff:/var/lib/docker/overlay2/f783596106daebadefbb7774015c369d757665d434c96581b426b2e5f5b453c4/diff:/var/lib/docker/overlay2/ac8927d3cc7829cc82e4c0214dd4adee97aedbe2b7d992cbbc08288443c8e320/diff:/var/lib/docker/overlay2/91eff4612dd56b2c82f4357b879f9cdcbc13704bf8f6dcbfc56febb104774843/diff:/var/lib/docker/overlay2/ee7366acc162efb7b878c4c56df021a8ef756fa595230ffe898cd0dd0355eb44/diff:/var/lib/docker/overlay2/ab5df115d2ec8cd71172942a2e449de32b93a3b6b5d90122c0e734c5e11d6bb2/diff:/var/lib/docker/overlay2/59fcbe9b552129cdc5c96e9e8ff27f4b88a12645aec3cf8f48f28d91521760e0/diff:/var/lib/docker/overlay2/0684681eb5880654b43e8803ef8f17b85e6129c85ff81c13b509563184a77625/diff:/var/lib/d
ocker/overlay2/8bbd24801b480df6ca8545e8f8bb09c17b7598c2868fb94ea5b8775ce2f311b4/diff:/var/lib/docker/overlay2/f28553cc59fccccfdfc5c24b7b8dfe4055c625d0a004731911c34b4ba32a9dfb/diff:/var/lib/docker/overlay2/2e47a8ef6e4481885d71f57a1d9ef99898b741644addd2796de5c2f4c696edb0/diff:/var/lib/docker/overlay2/e1f1eaaa809c974dabd197f590d19be05325f506a53a9a1f8ba29defd7096f60/diff:/var/lib/docker/overlay2/83ce12af60df76f98283ed8f3450cd5727b42d06055b18f04a18068b105ae128/diff:/var/lib/docker/overlay2/5fd34820f54e7f8f0c898c21b5d9d030e5b82c65c901897306c3db475481167f/diff:/var/lib/docker/overlay2/3ecb6f46fa47a8906ff5de1da5a63be9c664ff5bc66faf870126868d36bb77c6/diff:/var/lib/docker/overlay2/ccb92f12dd3e84b11b2c9b1ef6a0581ad5894648432ebe7cb5d16d48c7aacf6e/diff:/var/lib/docker/overlay2/7c6d11dc9abdd4916f3759c8ae4db8c3011cff872f2fd3cc502e7f663e496765/diff:/var/lib/docker/overlay2/b865b0351704115fa113e25f7651d1dc1e2f0348c332552e555e898094f34802/diff:/var/lib/docker/overlay2/bbcf207462c3f88368214d8e4ca222f28a828bd30661741d421665b4d10
80f07/diff:/var/lib/docker/overlay2/b554a32e9a2e4d3773e918754c27a1b32bc7ec5327d3bd1f52d7a146a07fa2c5/diff:/var/lib/docker/overlay2/d0a997bacfa9b1b54f61c62f00ad2797616ea9bb55182aad68ed805f96f5f72b/diff:/var/lib/docker/overlay2/e0c168ecfe6a93618f4f653c1aba422023114f242ab1045591d0c8454573d5c2/diff:/var/lib/docker/overlay2/fb67af38a46ef55935fcfb4f1be5f34b45b3d0e1c571538828117f23eedea417/diff:/var/lib/docker/overlay2/e96ed0776e5f27ef225469ac5f5e8ed2e299c72d5db88782599c0fdd1cec2fe3/diff:/var/lib/docker/overlay2/91b77e60e0a7864ace4f5a4d65f465bd7fe862616a87a74ee9fee21dc5dceb07/diff:/var/lib/docker/overlay2/9829211293f70b356dfa8d07b5dbbc3a6d05415cbd2840fd9dd948b8b315bf18/diff:/var/lib/docker/overlay2/dc35dda36e34a2f4f3a5d958b1a7d4d75db8655c4bc7b4b3d9591f43f9a645fc/diff:/var/lib/docker/overlay2/968c2bb04f641a9c8bd30d38659dc28973b31bfd577bb1aa355ae6c2ab4a0d34/diff:/var/lib/docker/overlay2/37432c6ae0b10a52e95b215fdd2e256362060f32c4a52f0d2021b6e10b3ed77b/diff:/var/lib/docker/overlay2/77687f9734b19f3e8a5bb08b07067e0b572775
20867b7a6ad80b67ffebe332d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e8d5531512b7a911f6cfe7100e18d8756ac20aba06f91fca74ca5931c044b75b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e8d5531512b7a911f6cfe7100e18d8756ac20aba06f91fca74ca5931c044b75b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e8d5531512b7a911f6cfe7100e18d8756ac20aba06f91fca74ca5931c044b75b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20220512005140-7184",
	                "Source": "/var/lib/docker/volumes/pause-20220512005140-7184/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20220512005140-7184",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20220512005140-7184",
	                "name.minikube.sigs.k8s.io": "pause-20220512005140-7184",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5fc86e33a51dd5e3f8a6f4418511d60ecf2eedb16bc3a9b28d55bc8d4edf64db",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49879"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49880"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49881"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49877"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49878"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5fc86e33a51d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20220512005140-7184": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "18e2eed271b9",
	                        "pause-20220512005140-7184"
	                    ],
	                    "NetworkID": "a9929553bfb020a9e4bf303619ae9b575309dee125013399d5cd8de3ba117e4b",
	                    "EndpointID": "663cda03432b757552b9a422b19dc422d6ee77f8fcff664417a7d7ae476fad45",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
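
The inspect dump above is the substance of this post-mortem: each control-plane port of the paused node (22, 2376, 5000, 8443, 32443) is published on 127.0.0.1 behind an ephemeral host port. To pull just those bindings out of a live container rather than scanning the full dump, a minimal Go sketch (illustrative only, not part of helpers_test.go; it shells out to the docker CLI) could look like this:

	// portbindings.go: print the host-port bindings for one container.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// binding mirrors one entry of NetworkSettings.Ports in `docker inspect` output.
	type binding struct {
		HostIp   string
		HostPort string
	}

	func main() {
		out, err := exec.Command("docker", "inspect",
			"--format", "{{json .NetworkSettings.Ports}}",
			"pause-20220512005140-7184").Output()
		if err != nil {
			panic(err)
		}
		ports := map[string][]binding{}
		if err := json.Unmarshal(out, &ports); err != nil {
			panic(err)
		}
		for port, bs := range ports {
			for _, b := range bs {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
			}
		}
	}

Run against the container above, it would print lines such as "22/tcp -> 127.0.0.1:49879".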
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20220512005140-7184 -n pause-20220512005140-7184
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20220512005140-7184 -n pause-20220512005140-7184: (7.0350501s)
helpers_test.go:244: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-20220512005140-7184 logs -n 25
E0512 01:01:24.944725    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-20220512005140-7184 logs -n 25: (7.8692202s)
helpers_test.go:252: TestPause/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|------------------------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                   Args                   |                 Profile                  |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------|------------------------------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p                                       | skaffold-20220512004259-7184             | minikube4\jenkins | v1.25.2 | 12 May 22 00:45 GMT | 12 May 22 00:45 GMT |
	|         | skaffold-20220512004259-7184             |                                          |                   |         |                     |                     |
	| delete  | -p                                       | insufficient-storage-20220512004557-7184 | minikube4\jenkins | v1.25.2 | 12 May 22 00:47 GMT | 12 May 22 00:47 GMT |
	|         | insufficient-storage-20220512004557-7184 |                                          |                   |         |                     |                     |
	| start   | -p                                       | NoKubernetes-20220512004748-7184         | minikube4\jenkins | v1.25.2 | 12 May 22 00:47 GMT | 12 May 22 00:50 GMT |
	|         | NoKubernetes-20220512004748-7184         |                                          |                   |         |                     |                     |
	|         | --driver=docker                          |                                          |                   |         |                     |                     |
	| start   | -p                                       | force-systemd-flag-20220512004748-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:47 GMT | 12 May 22 00:50 GMT |
	|         | force-systemd-flag-20220512004748-7184   |                                          |                   |         |                     |                     |
	|         | --memory=2048 --force-systemd            |                                          |                   |         |                     |                     |
	|         | --alsologtostderr -v=5 --driver=docker   |                                          |                   |         |                     |                     |
	| ssh     | force-systemd-flag-20220512004748-7184   | force-systemd-flag-20220512004748-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:50 GMT | 12 May 22 00:51 GMT |
	|         | ssh docker info --format                 |                                          |                   |         |                     |                     |
	|         | {{.CgroupDriver}}                        |                                          |                   |         |                     |                     |
	| start   | -p                                       | offline-docker-20220512004748-7184       | minikube4\jenkins | v1.25.2 | 12 May 22 00:47 GMT | 12 May 22 00:51 GMT |
	|         | offline-docker-20220512004748-7184       |                                          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                   |                                          |                   |         |                     |                     |
	|         | --memory=2048 --wait=true                |                                          |                   |         |                     |                     |
	|         | --driver=docker                          |                                          |                   |         |                     |                     |
	| delete  | -p                                       | force-systemd-flag-20220512004748-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:51 GMT | 12 May 22 00:51 GMT |
	|         | force-systemd-flag-20220512004748-7184   |                                          |                   |         |                     |                     |
	| delete  | -p                                       | offline-docker-20220512004748-7184       | minikube4\jenkins | v1.25.2 | 12 May 22 00:51 GMT | 12 May 22 00:51 GMT |
	|         | offline-docker-20220512004748-7184       |                                          |                   |         |                     |                     |
	| start   | -p                                       | NoKubernetes-20220512004748-7184         | minikube4\jenkins | v1.25.2 | 12 May 22 00:51 GMT | 12 May 22 00:51 GMT |
	|         | NoKubernetes-20220512004748-7184         |                                          |                   |         |                     |                     |
	|         | --no-kubernetes --driver=docker          |                                          |                   |         |                     |                     |
	| delete  | -p                                       | NoKubernetes-20220512004748-7184         | minikube4\jenkins | v1.25.2 | 12 May 22 00:51 GMT | 12 May 22 00:52 GMT |
	|         | NoKubernetes-20220512004748-7184         |                                          |                   |         |                     |                     |
	| delete  | -p                                       | NoKubernetes-20220512004748-7184         | minikube4\jenkins | v1.25.2 | 12 May 22 00:52 GMT | 12 May 22 00:53 GMT |
	|         | NoKubernetes-20220512004748-7184         |                                          |                   |         |                     |                     |
	| start   | -p                                       | stopped-upgrade-20220512004748-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:53 GMT | 12 May 22 00:54 GMT |
	|         | stopped-upgrade-20220512004748-7184      |                                          |                   |         |                     |                     |
	|         | --memory=2200 --alsologtostderr          |                                          |                   |         |                     |                     |
	|         | -v=1 --driver=docker                     |                                          |                   |         |                     |                     |
	| logs    | -p                                       | stopped-upgrade-20220512004748-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:54 GMT | 12 May 22 00:54 GMT |
	|         | stopped-upgrade-20220512004748-7184      |                                          |                   |         |                     |                     |
	| delete  | -p                                       | stopped-upgrade-20220512004748-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:54 GMT | 12 May 22 00:55 GMT |
	|         | stopped-upgrade-20220512004748-7184      |                                          |                   |         |                     |                     |
	| start   | -p                                       | kubernetes-upgrade-20220512005507-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:55 GMT | 12 May 22 00:57 GMT |
	|         | kubernetes-upgrade-20220512005507-7184   |                                          |                   |         |                     |                     |
	|         | --memory=2200                            |                                          |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0             |                                          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |                   |         |                     |                     |
	| stop    | -p                                       | kubernetes-upgrade-20220512005507-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:57 GMT | 12 May 22 00:57 GMT |
	|         | kubernetes-upgrade-20220512005507-7184   |                                          |                   |         |                     |                     |
	| start   | -p                                       | kubernetes-upgrade-20220512005507-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:57 GMT | 12 May 22 00:59 GMT |
	|         | kubernetes-upgrade-20220512005507-7184   |                                          |                   |         |                     |                     |
	|         | --memory=2200                            |                                          |                   |         |                     |                     |
	|         | --kubernetes-version=v1.23.6-rc.0        |                                          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |                   |         |                     |                     |
	| logs    | running-upgrade-20220512005137-7184      | running-upgrade-20220512005137-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:58 GMT | 12 May 22 00:59 GMT |
	|         | logs -n 25                               |                                          |                   |         |                     |                     |
	| start   | -p                                       | missing-upgrade-20220512005316-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:56 GMT | 12 May 22 00:59 GMT |
	|         | missing-upgrade-20220512005316-7184      |                                          |                   |         |                     |                     |
	|         | --memory=2200 --alsologtostderr          |                                          |                   |         |                     |                     |
	|         | -v=1 --driver=docker                     |                                          |                   |         |                     |                     |
	| start   | -p                                       | kubernetes-upgrade-20220512005507-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:59 GMT | 12 May 22 00:59 GMT |
	|         | kubernetes-upgrade-20220512005507-7184   |                                          |                   |         |                     |                     |
	|         | --memory=2200                            |                                          |                   |         |                     |                     |
	|         | --kubernetes-version=v1.23.6-rc.0        |                                          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |                   |         |                     |                     |
	| delete  | -p                                       | missing-upgrade-20220512005316-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:59 GMT | 12 May 22 00:59 GMT |
	|         | missing-upgrade-20220512005316-7184      |                                          |                   |         |                     |                     |
	| delete  | -p                                       | running-upgrade-20220512005137-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:59 GMT | 12 May 22 00:59 GMT |
	|         | running-upgrade-20220512005137-7184      |                                          |                   |         |                     |                     |
	| delete  | -p                                       | kubernetes-upgrade-20220512005507-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:59 GMT | 12 May 22 01:00 GMT |
	|         | kubernetes-upgrade-20220512005507-7184   |                                          |                   |         |                     |                     |
	| start   | -p pause-20220512005140-7184             | pause-20220512005140-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 00:51 GMT | 12 May 22 01:00 GMT |
	|         | --memory=2048                            |                                          |                   |         |                     |                     |
	|         | --install-addons=false                   |                                          |                   |         |                     |                     |
	|         | --wait=all --driver=docker               |                                          |                   |         |                     |                     |
	| start   | -p pause-20220512005140-7184             | pause-20220512005140-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:00 GMT | 12 May 22 01:01 GMT |
	|         | --alsologtostderr -v=1                   |                                          |                   |         |                     |                     |
	|         | --driver=docker                          |                                          |                   |         |                     |                     |
	|---------|------------------------------------------|------------------------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/12 01:00:21
	Running on machine: minikube4
	Binary: Built with gc go1.18.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0512 01:00:21.213283    9720 out.go:296] Setting OutFile to fd 1688 ...
	I0512 01:00:21.282534    9720 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:00:21.282534    9720 out.go:309] Setting ErrFile to fd 1656...
	I0512 01:00:21.282534    9720 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:00:21.305200    9720 out.go:303] Setting JSON to false
	I0512 01:00:21.308301    9720 start.go:115] hostinfo: {"hostname":"minikube4","uptime":15674,"bootTime":1652301547,"procs":172,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0512 01:00:21.308301    9720 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0512 01:00:21.660884    9720 out.go:177] * [pause-20220512005140-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0512 01:00:21.672856    9720 notify.go:193] Checking for updates...
	I0512 01:00:21.675391    9720 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0512 01:00:21.684694    9720 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0512 01:00:21.690598    9720 out.go:177]   - MINIKUBE_LOCATION=13639
	I0512 01:00:21.696642    9720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0512 01:00:19.429679    2560 start.go:284] selected driver: docker
	I0512 01:00:19.430675    2560 start.go:801] validating driver "docker" against <nil>
	I0512 01:00:19.430735    2560 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0512 01:00:19.508422    2560 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:00:21.694798    2560 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1860777s)
	I0512 01:00:21.695172    2560 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:69 OomKillDisable:true NGoroutines:67 SystemTime:2022-05-12 01:00:20.6005639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:00:21.695733    2560 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0512 01:00:21.696642    2560 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0512 01:00:21.702107    2560 out.go:177] * Using Docker Desktop driver with the root privilege
	I0512 01:00:21.706636    2560 cni.go:95] Creating CNI manager for ""
	I0512 01:00:21.706636    2560 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 01:00:21.706692    2560 start_flags.go:306] config:
	{Name:cert-options-20220512010013-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:cert-options-20220512010013-7184 Namespace:default APIServerName:localhost APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 01:00:21.711205    2560 out.go:177] * Starting control plane node cert-options-20220512010013-7184 in cluster cert-options-20220512010013-7184
	I0512 01:00:21.714196    2560 cache.go:120] Beginning downloading kic base image for docker with docker
	I0512 01:00:21.716234    2560 out.go:177] * Pulling base image ...
	I0512 01:00:21.720203    2560 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:00:21.720203    2560 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0512 01:00:21.720203    2560 preload.go:148] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0512 01:00:21.720203    2560 cache.go:57] Caching tarball of preloaded images
	I0512 01:00:21.720203    2560 preload.go:174] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0512 01:00:21.721226    2560 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0512 01:00:21.721226    2560 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-options-20220512010013-7184\config.json ...
	I0512 01:00:21.721226    2560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-options-20220512010013-7184\config.json: {Name:mkf0687d73aad6be387e3af041b729cff9e41140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:00:22.782049    2560 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
	I0512 01:00:22.782049    2560 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in daemon, skipping load
	I0512 01:00:22.782102    2560 cache.go:206] Successfully downloaded all kic artifacts
	I0512 01:00:22.782102    2560 start.go:352] acquiring machines lock for cert-options-20220512010013-7184: {Name:mkc9630d6bf42b39fa8bbbbf1e40af095a872c10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 01:00:22.782102    2560 start.go:356] acquired machines lock for "cert-options-20220512010013-7184" in 0s
	I0512 01:00:22.782102    2560 start.go:91] Provisioning new machine with config: &{Name:cert-options-20220512010013-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:cert-options-20220512010013-7184 Namespace:default APIServerName:localhost APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8555 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 01:00:22.782102    2560 start.go:131] createHost starting for "" (driver="docker")
	I0512 01:00:22.785508    2560 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0512 01:00:22.785508    2560 start.go:165] libmachine.API.Create for "cert-options-20220512010013-7184" (driver="docker")
	I0512 01:00:22.786038    2560 client.go:168] LocalClient.Create starting
	I0512 01:00:22.786304    2560 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0512 01:00:22.786304    2560 main.go:134] libmachine: Decoding PEM data...
	I0512 01:00:22.786304    2560 main.go:134] libmachine: Parsing certificate...
	I0512 01:00:22.786304    2560 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0512 01:00:22.786860    2560 main.go:134] libmachine: Decoding PEM data...
	I0512 01:00:22.786899    2560 main.go:134] libmachine: Parsing certificate...
	I0512 01:00:22.795153    2560 cli_runner.go:164] Run: docker network inspect cert-options-20220512010013-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 01:00:21.702773    9720 config.go:178] Loaded profile config "pause-20220512005140-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:00:21.703939    9720 driver.go:358] Setting default libvirt URI to qemu:///system
	I0512 01:00:25.084035    9720 docker.go:137] docker version: linux-20.10.14
	I0512 01:00:25.093157    9720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:00:27.451330    9720 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.357964s)
	I0512 01:00:27.452657    9720 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:70 OomKillDisable:true NGoroutines:71 SystemTime:2022-05-12 01:00:26.3237667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:00:27.456531    9720 out.go:177] * Using the docker driver based on existing profile
	W0512 01:00:23.931606    2560 cli_runner.go:211] docker network inspect cert-options-20220512010013-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 01:00:24.564597    2560 cli_runner.go:217] Completed: docker network inspect cert-options-20220512010013-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.136395s)
	I0512 01:00:24.576495    2560 network_create.go:272] running [docker network inspect cert-options-20220512010013-7184] to gather additional debugging logs...
	I0512 01:00:24.576495    2560 cli_runner.go:164] Run: docker network inspect cert-options-20220512010013-7184
	W0512 01:00:26.044362    2560 cli_runner.go:211] docker network inspect cert-options-20220512010013-7184 returned with exit code 1
	I0512 01:00:26.044362    2560 cli_runner.go:217] Completed: docker network inspect cert-options-20220512010013-7184: (1.4677914s)
	I0512 01:00:26.044362    2560 network_create.go:275] error running [docker network inspect cert-options-20220512010013-7184]: docker network inspect cert-options-20220512010013-7184: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cert-options-20220512010013-7184
	I0512 01:00:26.044362    2560 network_create.go:277] output of [docker network inspect cert-options-20220512010013-7184]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cert-options-20220512010013-7184
	
	** /stderr **
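	
	The pattern in network_create.go here is worth noting: after the formatted inspect fails, the bare `docker network inspect` is re-run solely to capture its stdout and stderr for the log above. A condensed sketch of that capture step in Go (hypothetical; the real flow goes through cli_runner and also records timing):
	
		package main
	
		import (
			"bytes"
			"fmt"
			"os/exec"
		)
	
		func main() {
			cmd := exec.Command("docker", "network", "inspect",
				"cert-options-20220512010013-7184")
			var stdout, stderr bytes.Buffer
			cmd.Stdout, cmd.Stderr = &stdout, &stderr
			err := cmd.Run() // exit status 1 when the network does not exist
			fmt.Printf("err: %v\n-- stdout --\n%s** stderr **\n%s", err, stdout.String(), stderr.String())
		}
	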
	I0512 01:00:26.054098    2560 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 01:00:27.218635    2560 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1644771s)
	I0512 01:00:27.243640    2560 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000120598] misses:0}
	I0512 01:00:27.243640    2560 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:00:27.243640    2560 network_create.go:115] attempt to create docker network cert-options-20220512010013-7184 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0512 01:00:27.250634    2560 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220512010013-7184
	W0512 01:00:28.458578    2560 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220512010013-7184 returned with exit code 1
	I0512 01:00:28.458578    2560 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220512010013-7184: (1.2077512s)
	W0512 01:00:28.458913    2560 network_create.go:107] failed to create docker network cert-options-20220512010013-7184 192.168.49.0/24, will retry: subnet is taken
	I0512 01:00:28.480051    2560 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000120598] amended:false}} dirty:map[] misses:0}
	I0512 01:00:28.480051    2560 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:00:28.501065    2560 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000120598] amended:true}} dirty:map[192.168.49.0:0xc000120598 192.168.58.0:0xc000006740] misses:0}
	I0512 01:00:28.501065    2560 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:00:28.501065    2560 network_create.go:115] attempt to create docker network cert-options-20220512010013-7184 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0512 01:00:28.510056    2560 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220512010013-7184
	I0512 01:00:27.458527    9720 start.go:284] selected driver: docker
	I0512 01:00:27.458527    9720 start.go:801] validating driver "docker" against &{Name:pause-20220512005140-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:pause-20220512005140-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 01:00:27.458804    9720 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0512 01:00:27.486708    9720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:00:29.736946    9720 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2500185s)
	I0512 01:00:29.737367    9720 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:69 OomKillDisable:true NGoroutines:67 SystemTime:2022-05-12 01:00:28.5688262 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:00:29.798386    9720 cni.go:95] Creating CNI manager for ""
	I0512 01:00:29.798386    9720 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 01:00:29.798386    9720 start_flags.go:306] config:
	{Name:pause-20220512005140-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:pause-20220512005140-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 01:00:29.803376    9720 out.go:177] * Starting control plane node pause-20220512005140-7184 in cluster pause-20220512005140-7184
	I0512 01:00:29.806378    9720 cache.go:120] Beginning downloading kic base image for docker with docker
	I0512 01:00:29.810380    9720 out.go:177] * Pulling base image ...
	I0512 01:00:27.171250    8484 cli_runner.go:217] Completed: docker run --rm --name docker-flags-20220512005959-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20220512005959-7184 --entrypoint /usr/bin/test -v docker-flags-20220512005959-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib: (8.7110453s)
	I0512 01:00:27.171250    8484 oci.go:107] Successfully prepared a docker volume docker-flags-20220512005959-7184
	I0512 01:00:27.171250    8484 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:00:27.171631    8484 kic.go:179] Starting extracting preloaded images to volume ...
	I0512 01:00:27.179634    8484 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20220512005959-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir
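	
	This is minikube's preload sidecar pattern: a throwaway kicbase container mounts the lz4 preload tarball read-only and untars it straight into the profile's named volume, so the volume is populated before the node itself boots. Reconstructed as a standalone Go sketch (the Windows host path is abbreviated and the image digest elided; the full values are in the log line above):
	
		package main
	
		import "os/exec"
	
		func main() {
			// Abbreviated host path; the full cache path appears in the log above.
			preload := `C:\...\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4`
			cmd := exec.Command("docker", "run", "--rm",
				"--entrypoint", "/usr/bin/tar",
				"-v", preload+":/preloaded.tar:ro", // tarball, mounted read-only
				"-v", "docker-flags-20220512005959-7184:/extractDir", // named volume target
				"gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138", // digest elided
				"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
			if err := cmd.Run(); err != nil {
				panic(err)
			}
		}
	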
	I0512 01:00:29.812376    9720 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:00:29.812376    9720 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0512 01:00:29.812376    9720 preload.go:148] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0512 01:00:29.812376    9720 cache.go:57] Caching tarball of preloaded images
	I0512 01:00:29.812376    9720 preload.go:174] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0512 01:00:29.812376    9720 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0512 01:00:29.813388    9720 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\pause-20220512005140-7184\config.json ...
	I0512 01:00:30.931114    9720 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
	I0512 01:00:30.931114    9720 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in daemon, skipping load
	I0512 01:00:30.931114    9720 cache.go:206] Successfully downloaded all kic artifacts
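	
	The cache lines above short-circuit the pull: the kicbase image is already present in the local daemon, so only its existence is verified. A rough equivalent of that presence test (illustrative only; not minikube's actual implementation, which does not shell out like this):
	
		package main
	
		import (
			"fmt"
			"os/exec"
		)
	
		// imageInDaemon reports whether ref is already loaded in the local Docker
		// daemon: `docker image inspect` exits non-zero when the image is absent.
		func imageInDaemon(ref string) bool {
			return exec.Command("docker", "image", "inspect", ref).Run() == nil
		}
	
		func main() {
			ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138"
			if imageInDaemon(ref) {
				fmt.Println("exists in daemon, skipping load")
			} else {
				fmt.Println("not cached; a pull would be needed for", ref)
			}
		}
	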
	I0512 01:00:30.931114    9720 start.go:352] acquiring machines lock for pause-20220512005140-7184: {Name:mk3327eaa9951f77c6b8356d0562285f66d4de7a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 01:00:30.931114    9720 start.go:356] acquired machines lock for "pause-20220512005140-7184" in 0s
	I0512 01:00:30.931114    9720 start.go:94] Skipping create...Using existing machine configuration
	I0512 01:00:30.931114    9720 fix.go:55] fixHost starting: 
	I0512 01:00:30.960315    9720 cli_runner.go:164] Run: docker container inspect pause-20220512005140-7184 --format={{.State.Status}}
	I0512 01:00:32.118282    9720 cli_runner.go:217] Completed: docker container inspect pause-20220512005140-7184 --format={{.State.Status}}: (1.1579072s)
	I0512 01:00:32.118282    9720 fix.go:103] recreateIfNeeded on pause-20220512005140-7184: state=Running err=<nil>
	W0512 01:00:32.118282    9720 fix.go:129] unexpected machine state, will restart: <nil>
	I0512 01:00:32.120285    9720 out.go:177] * Updating the running docker "pause-20220512005140-7184" container ...
	W0512 01:00:29.720799    2560 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220512010013-7184 returned with exit code 1
	I0512 01:00:29.720799    2560 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220512010013-7184: (1.2106809s)
	W0512 01:00:29.720799    2560 network_create.go:107] failed to create docker network cert-options-20220512010013-7184 192.168.58.0/24, will retry: subnet is taken
	I0512 01:00:29.742593    2560 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000120598] amended:true}} dirty:map[192.168.49.0:0xc000120598 192.168.58.0:0xc000006740] misses:1}
	I0512 01:00:29.742593    2560 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:00:29.770954    2560 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000120598] amended:true}} dirty:map[192.168.49.0:0xc000120598 192.168.58.0:0xc000006740 192.168.67.0:0xc000120658] misses:1}
	I0512 01:00:29.770954    2560 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:00:29.770954    2560 network_create.go:115] attempt to create docker network cert-options-20220512010013-7184 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0512 01:00:29.780150    2560 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220512010013-7184
	W0512 01:00:30.884200    2560 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220512010013-7184 returned with exit code 1
	I0512 01:00:30.884200    2560 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220512010013-7184: (1.1034338s)
	W0512 01:00:30.884200    2560 network_create.go:107] failed to create docker network cert-options-20220512010013-7184 192.168.67.0/24, will retry: subnet is taken
	I0512 01:00:30.904175    2560 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000120598] amended:true}} dirty:map[192.168.49.0:0xc000120598 192.168.58.0:0xc000006740 192.168.67.0:0xc000120658] misses:2}
	I0512 01:00:30.905330    2560 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:00:30.925672    2560 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000120598] amended:true}} dirty:map[192.168.49.0:0xc000120598 192.168.58.0:0xc000006740 192.168.67.0:0xc000120658 192.168.76.0:0xc0000084b8] misses:2}
	I0512 01:00:30.925672    2560 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:00:30.925672    2560 network_create.go:115] attempt to create docker network cert-options-20220512010013-7184 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0512 01:00:30.936956    2560 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220512010013-7184
	I0512 01:00:32.163703    2560 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220512010013-7184: (1.2266834s)
	I0512 01:00:32.163703    2560 network_create.go:99] docker network cert-options-20220512010013-7184 192.168.76.0/24 created
	I0512 01:00:32.163703    2560 kic.go:106] calculated static IP "192.168.76.2" for the "cert-options-20220512010013-7184" container
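
The three attempts above show minikube probing for a free private subnet: 192.168.58.0/24 and 192.168.67.0/24 were already taken, so it reserved and created 192.168.76.0/24. Below is a minimal Go sketch of that retry loop; the tryCreateNetwork helper and the demo network name are illustrative assumptions, not minikube's actual network package.

package main

import (
	"fmt"
	"os/exec"
)

// tryCreateNetwork shells out to `docker network create` for one candidate
// subnet; it fails when the subnet is already in use by another network.
func tryCreateNetwork(name, subnet, gateway string) error {
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"--label=created_by.minikube.sigs.k8s.io=true",
		name).CombinedOutput()
	if err != nil {
		return fmt.Errorf("create %s: %v: %s", subnet, err, out)
	}
	return nil
}

func main() {
	// Step through the same 192.168.x.0/24 candidates seen in the log:
	// .58 and .67 were taken, .76 succeeded.
	for _, base := range []int{58, 67, 76} {
		subnet := fmt.Sprintf("192.168.%d.0/24", base)
		gateway := fmt.Sprintf("192.168.%d.1", base)
		if err := tryCreateNetwork("subnet-probe-demo", subnet, gateway); err != nil {
			fmt.Println("subnet taken, retrying:", err)
			continue
		}
		fmt.Println("created network on", subnet)
		return
	}
}

minikube additionally keeps an in-process reservation map (the network.go:288 "reserving subnet ... for 1m0s" lines above) so concurrent starts do not race for the same subnet; the sketch skips that bookkeeping.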
	I0512 01:00:32.181344    2560 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0512 01:00:33.221074    2560 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0386712s)
	I0512 01:00:33.230209    2560 cli_runner.go:164] Run: docker volume create cert-options-20220512010013-7184 --label name.minikube.sigs.k8s.io=cert-options-20220512010013-7184 --label created_by.minikube.sigs.k8s.io=true
	I0512 01:00:32.123369    9720 machine.go:88] provisioning docker machine ...
	I0512 01:00:32.123369    9720 ubuntu.go:169] provisioning hostname "pause-20220512005140-7184"
	I0512 01:00:32.131649    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:33.205821    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.0741157s)
	I0512 01:00:33.211278    9720 main.go:134] libmachine: Using SSH client type: native
	I0512 01:00:33.212285    9720 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49879 <nil> <nil>}
	I0512 01:00:33.212285    9720 main.go:134] libmachine: About to run SSH command:
	sudo hostname pause-20220512005140-7184 && echo "pause-20220512005140-7184" | sudo tee /etc/hostname
	I0512 01:00:33.441753    9720 main.go:134] libmachine: SSH cmd err, output: <nil>: pause-20220512005140-7184
	
	I0512 01:00:33.451179    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:34.619225    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.167205s)
	I0512 01:00:34.623631    9720 main.go:134] libmachine: Using SSH client type: native
	I0512 01:00:34.623631    9720 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49879 <nil> <nil>}
	I0512 01:00:34.623631    9720 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20220512005140-7184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20220512005140-7184/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20220512005140-7184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0512 01:00:34.763536    9720 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0512 01:00:34.763536    9720 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0512 01:00:34.763536    9720 ubuntu.go:177] setting up certificates
	I0512 01:00:34.763536    9720 provision.go:83] configureAuth start
	I0512 01:00:34.772548    9720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220512005140-7184
	I0512 01:00:35.875930    9720 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220512005140-7184: (1.1033257s)
	I0512 01:00:35.875930    9720 provision.go:138] copyHostCerts
	I0512 01:00:35.875930    9720 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0512 01:00:35.875930    9720 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0512 01:00:35.876744    9720 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0512 01:00:35.877687    9720 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0512 01:00:35.877687    9720 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0512 01:00:35.878488    9720 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0512 01:00:35.879470    9720 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0512 01:00:35.879470    9720 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0512 01:00:35.879470    9720 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0512 01:00:35.880701    9720 provision.go:112] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.pause-20220512005140-7184 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20220512005140-7184]
	I0512 01:00:36.047848    9720 provision.go:172] copyRemoteCerts
	I0512 01:00:36.057611    9720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0512 01:00:36.063600    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:34.335226    2560 cli_runner.go:217] Completed: docker volume create cert-options-20220512010013-7184 --label name.minikube.sigs.k8s.io=cert-options-20220512010013-7184 --label created_by.minikube.sigs.k8s.io=true: (1.10496s)
	I0512 01:00:34.335226    2560 oci.go:103] Successfully created a docker volume cert-options-20220512010013-7184
	I0512 01:00:34.344687    2560 cli_runner.go:164] Run: docker run --rm --name cert-options-20220512010013-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-20220512010013-7184 --entrypoint /usr/bin/test -v cert-options-20220512010013-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib
	I0512 01:00:37.280228    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.2165655s)
	I0512 01:00:40.301352    9720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49879 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\pause-20220512005140-7184\id_rsa Username:docker}
	I0512 01:00:40.455812    9720 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3974422s)
	I0512 01:00:40.456242    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0512 01:00:40.576187    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I0512 01:00:40.632325    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0512 01:00:40.687770    9720 provision.go:86] duration metric: configureAuth took 5.9239278s
	I0512 01:00:40.687770    9720 ubuntu.go:193] setting minikube options for container-runtime
	I0512 01:00:40.688788    9720 config.go:178] Loaded profile config "pause-20220512005140-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:00:40.698717    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:41.785454    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.0866808s)
	I0512 01:00:41.789462    9720 main.go:134] libmachine: Using SSH client type: native
	I0512 01:00:41.790463    9720 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49879 <nil> <nil>}
	I0512 01:00:41.790463    9720 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0512 01:00:41.972697    9720 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0512 01:00:41.972697    9720 ubuntu.go:71] root file system type: overlay
	I0512 01:00:41.972697    9720 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0512 01:00:41.979784    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:43.050362    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.0702512s)
	I0512 01:00:43.056713    9720 main.go:134] libmachine: Using SSH client type: native
	I0512 01:00:43.056713    9720 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49879 <nil> <nil>}
	I0512 01:00:43.056713    9720 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0512 01:00:43.258865    9720 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0512 01:00:43.267382    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:44.355201    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.0872181s)
	I0512 01:00:44.358192    9720 main.go:134] libmachine: Using SSH client type: native
	I0512 01:00:44.359201    9720 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49879 <nil> <nil>}
	I0512 01:00:44.359201    9720 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0512 01:00:44.573776    9720 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0512 01:00:44.574305    9720 machine.go:91] provisioned docker machine in 12.4502931s
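
The SSH one-liner above is an idempotent unit install: docker.service.new replaces docker.service, followed by daemon-reload, enable, and restart, only when `diff` reports a difference, so an unchanged unit never restarts the daemon. A small Go sketch of the same compare-then-swap idea, run locally rather than over SSH; the paths and systemctl flags are taken from the log, the rest is illustrative.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const cur = "/lib/systemd/system/docker.service"
	const next = "/lib/systemd/system/docker.service.new"

	// A missing current unit reads as empty here, which forces the update,
	// mirroring `diff` failing when the file does not exist yet.
	old, _ := os.ReadFile(cur)
	updated, err := os.ReadFile(next)
	if err != nil {
		panic(err)
	}
	if bytes.Equal(old, updated) {
		fmt.Println("unit unchanged, skipping restart")
		return
	}
	if err := os.Rename(next, cur); err != nil {
		panic(err)
	}
	// Same sequence the log runs after swapping the unit in.
	for _, args := range [][]string{
		{"systemctl", "-f", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("%s: %v: %s", args[0], err, out))
		}
	}
	fmt.Println("docker.service updated and restarted")
}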
	I0512 01:00:44.574305    9720 start.go:306] post-start starting for "pause-20220512005140-7184" (driver="docker")
	I0512 01:00:44.574376    9720 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0512 01:00:44.590283    9720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0512 01:00:44.600266    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:45.751578    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.1512523s)
	I0512 01:00:45.751578    9720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49879 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\pause-20220512005140-7184\id_rsa Username:docker}
	I0512 01:00:45.837435    9720 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.2470253s)
	I0512 01:00:45.852441    9720 ssh_runner.go:195] Run: cat /etc/os-release
	I0512 01:00:45.863443    9720 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0512 01:00:45.863443    9720 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0512 01:00:45.863443    9720 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0512 01:00:45.863443    9720 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0512 01:00:45.863443    9720 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0512 01:00:45.865619    9720 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0512 01:00:45.866430    9720 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem -> 71842.pem in /etc/ssl/certs
	I0512 01:00:45.877430    9720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0512 01:00:45.955497    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /etc/ssl/certs/71842.pem (1708 bytes)
	I0512 01:00:46.064856    9720 start.go:309] post-start completed in 1.4904036s
	I0512 01:00:46.076590    9720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 01:00:46.085221    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:45.122117    6824 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cert-expiration-20220512005951-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir: (31.5493632s)
	I0512 01:00:45.122117    6824 kic.go:188] duration metric: took 31.556362 seconds to extract preloaded images to volume
	I0512 01:00:45.129098    6824 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:00:48.474582    2560 cli_runner.go:217] Completed: docker run --rm --name cert-options-20220512010013-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-20220512010013-7184 --entrypoint /usr/bin/test -v cert-options-20220512010013-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib: (14.1291662s)
	I0512 01:00:48.474582    2560 oci.go:107] Successfully prepared a docker volume cert-options-20220512010013-7184
	I0512 01:00:48.474582    2560 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:00:48.474829    2560 kic.go:179] Starting extracting preloaded images to volume ...
	I0512 01:00:48.481947    2560 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cert-options-20220512010013-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir
	I0512 01:00:47.156416    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.07114s)
	I0512 01:00:47.156416    9720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49879 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\pause-20220512005140-7184\id_rsa Username:docker}
	I0512 01:00:47.262011    9720 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.1853597s)
	I0512 01:00:47.271928    9720 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0512 01:00:47.282929    9720 fix.go:57] fixHost completed within 16.3509717s
	I0512 01:00:47.282929    9720 start.go:81] releasing machines lock for "pause-20220512005140-7184", held for 16.3509717s
	I0512 01:00:47.289932    9720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220512005140-7184
	I0512 01:00:48.346520    9720 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220512005140-7184: (1.0565337s)
	I0512 01:00:48.351156    9720 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0512 01:00:48.359830    9720 ssh_runner.go:195] Run: systemctl --version
	I0512 01:00:48.362836    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:48.367839    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:49.441356    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.0784644s)
	I0512 01:00:49.441356    9720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49879 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\pause-20220512005140-7184\id_rsa Username:docker}
	I0512 01:00:49.456707    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.0888118s)
	I0512 01:00:49.456707    9720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49879 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\pause-20220512005140-7184\id_rsa Username:docker}
	I0512 01:00:49.608714    9720 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.2574265s)
	I0512 01:00:49.608714    9720 ssh_runner.go:235] Completed: systemctl --version: (1.2488191s)
	I0512 01:00:49.620717    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0512 01:00:49.656708    9720 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 01:00:49.683741    9720 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0512 01:00:49.694906    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0512 01:00:49.725785    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0512 01:00:50.064014    9720 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0512 01:00:50.276641    9720 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0512 01:00:50.469789    9720 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 01:00:50.509463    9720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0512 01:00:50.766373    9720 ssh_runner.go:195] Run: sudo systemctl start docker
	I0512 01:00:50.799999    9720 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 01:00:50.892676    9720 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 01:00:50.976553    9720 out.go:204] * Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	I0512 01:00:50.983568    9720 cli_runner.go:164] Run: docker exec -t pause-20220512005140-7184 dig +short host.docker.internal
	I0512 01:00:47.314929    6824 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1857182s)
	I0512 01:00:47.314929    6824 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:69 OomKillDisable:true NGoroutines:71 SystemTime:2022-05-12 01:00:46.2203529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:00:47.321932    6824 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0512 01:00:49.488697    6824 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.166653s)
	I0512 01:00:49.499697    6824 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-20220512005951-7184 --name cert-expiration-20220512005951-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-20220512005951-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-20220512005951-7184 --network cert-expiration-20220512005951-7184 --ip 192.168.58.2 --volume cert-expiration-20220512005951-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
	I0512 01:00:51.928393    8484 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20220512005959-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir: (24.7474816s)
	I0512 01:00:51.928393    8484 kic.go:188] duration metric: took 24.755485 seconds to extract preloaded images to volume
	I0512 01:00:51.936391    8484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:00:54.332242    8484 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.3957281s)
	I0512 01:00:54.332242    8484 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:81 OomKillDisable:true NGoroutines:70 SystemTime:2022-05-12 01:00:53.1429681 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:00:54.340232    8484 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0512 01:00:52.305372    9720 cli_runner.go:217] Completed: docker exec -t pause-20220512005140-7184 dig +short host.docker.internal: (1.3217351s)
	I0512 01:00:52.305372    9720 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0512 01:00:52.316362    9720 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
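
The `dig +short host.docker.internal` exec above is how minikube learns the host's address from inside the container before checking /etc/hosts. A short sketch of the same probe, reusing the container name from this run; on Docker Desktop the answer is typically 192.168.65.2, as logged.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Container name copied from this run; any running kicbase container
	// with dig installed would answer the same way.
	out, err := exec.Command("docker", "exec", "-t", "pause-20220512005140-7184",
		"dig", "+short", "host.docker.internal").Output()
	if err != nil {
		panic(err)
	}
	hostIP := strings.TrimSpace(string(out)) // e.g. 192.168.65.2 on Docker Desktop
	fmt.Println("host ip for mount in container:", hostIP)
}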
	I0512 01:00:52.337396    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:53.511603    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.1741461s)
	I0512 01:00:53.511603    9720 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:00:53.527598    9720 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 01:00:53.610948    9720 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 01:00:53.610948    9720 docker.go:541] Images already preloaded, skipping extraction
	I0512 01:00:53.625001    9720 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 01:00:53.689181    9720 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 01:00:53.689181    9720 cache_images.go:84] Images are preloaded, skipping loading
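
The `docker images --format {{.Repository}}:{{.Tag}}` listings above feed the preload check: when every expected image tag is already in the daemon, extraction of the preload tarball is skipped. A rough Go sketch of that comparison, with the expected list copied from the stdout block above; the exact check minikube performs may differ.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List every repository:tag the daemon currently has.
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, tag := range strings.Fields(string(out)) {
		have[tag] = true
	}
	// Expected tags taken verbatim from the log output above.
	expected := []string{
		"k8s.gcr.io/kube-apiserver:v1.23.5",
		"k8s.gcr.io/kube-proxy:v1.23.5",
		"k8s.gcr.io/kube-scheduler:v1.23.5",
		"k8s.gcr.io/kube-controller-manager:v1.23.5",
		"k8s.gcr.io/etcd:3.5.1-0",
		"k8s.gcr.io/coredns/coredns:v1.8.6",
		"k8s.gcr.io/pause:3.6",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing, would extract preload:", img)
			return
		}
	}
	fmt.Println("images already preloaded, skipping extraction")
}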
	I0512 01:00:53.697168    9720 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0512 01:00:53.897956    9720 cni.go:95] Creating CNI manager for ""
	I0512 01:00:53.898962    9720 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 01:00:53.898962    9720 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0512 01:00:53.898962    9720 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20220512005140-7184 NodeName:pause-20220512005140-7184 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0512 01:00:53.898962    9720 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "pause-20220512005140-7184"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0512 01:00:53.898962    9720 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=pause-20220512005140-7184 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:pause-20220512005140-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0512 01:00:53.913934    9720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0512 01:00:53.939131    9720 binaries.go:44] Found k8s binaries, skipping transfer
	I0512 01:00:53.952465    9720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0512 01:00:53.973427    9720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0512 01:00:54.008390    9720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0512 01:00:54.047384    9720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2047 bytes)
	I0512 01:00:54.094406    9720 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0512 01:00:54.104417    9720 certs.go:54] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\pause-20220512005140-7184 for IP: 192.168.67.2
	I0512 01:00:54.104417    9720 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I0512 01:00:54.104417    9720 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I0512 01:00:54.105401    9720 certs.go:298] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\pause-20220512005140-7184\client.key
	I0512 01:00:54.105401    9720 certs.go:298] skipping minikube signed cert generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\pause-20220512005140-7184\apiserver.key.c7fa3a9e
	I0512 01:00:54.106392    9720 certs.go:298] skipping aggregator signed cert generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\pause-20220512005140-7184\proxy-client.key
	I0512 01:00:54.107386    9720 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184.pem (1338 bytes)
	W0512 01:00:54.107386    9720 certs.go:384] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184_empty.pem, impossibly tiny 0 bytes
	I0512 01:00:54.107386    9720 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0512 01:00:54.107386    9720 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0512 01:00:54.107386    9720 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0512 01:00:54.108398    9720 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0512 01:00:54.108398    9720 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem (1708 bytes)
	I0512 01:00:54.109397    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\pause-20220512005140-7184\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0512 01:00:54.166032    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\pause-20220512005140-7184\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0512 01:00:54.234233    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\pause-20220512005140-7184\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0512 01:00:54.288933    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\pause-20220512005140-7184\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0512 01:00:54.346244    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0512 01:00:54.398234    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0512 01:00:54.451231    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0512 01:00:54.501447    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0512 01:00:54.545503    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /usr/share/ca-certificates/71842.pem (1708 bytes)
	I0512 01:00:54.596181    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0512 01:00:54.648264    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184.pem --> /usr/share/ca-certificates/7184.pem (1338 bytes)
	I0512 01:00:54.710433    9720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0512 01:00:54.770198    9720 ssh_runner.go:195] Run: openssl version
	I0512 01:00:54.822088    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71842.pem && ln -fs /usr/share/ca-certificates/71842.pem /etc/ssl/certs/71842.pem"
	I0512 01:00:54.863766    9720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71842.pem
	I0512 01:00:54.878108    9720 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 11 23:10 /usr/share/ca-certificates/71842.pem
	I0512 01:00:54.890978    9720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71842.pem
	I0512 01:00:54.929069    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71842.pem /etc/ssl/certs/3ec20f2e.0"
	I0512 01:00:54.972779    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0512 01:00:55.019413    9720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:00:55.032411    9720 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 11 23:00 /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:00:55.047410    9720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:00:55.073427    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0512 01:00:55.110515    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7184.pem && ln -fs /usr/share/ca-certificates/7184.pem /etc/ssl/certs/7184.pem"
	I0512 01:00:55.142523    9720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7184.pem
	I0512 01:00:55.151527    9720 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 11 23:10 /usr/share/ca-certificates/7184.pem
	I0512 01:00:55.165516    9720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7184.pem
	I0512 01:00:55.188519    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7184.pem /etc/ssl/certs/51391683.0"
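
The openssl/ln pairs above do what c_rehash does: hash each CA certificate's subject with `openssl x509 -hash` and symlink it as `<hash>.0` under /etc/ssl/certs so OpenSSL can find it by subject hash. A small sketch of one such link, reusing the minikubeCA.pem path and matching the b5213941 hash seen above; minikube performs this over SSH inside the node rather than locally.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // b5213941 for this CA, per the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs equivalent: drop any stale link, then create a fresh one.
	_ = os.Remove(link)
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pem)
}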
	I0512 01:00:55.208521    9720 kubeadm.go:391] StartCluster: {Name:pause-20220512005140-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:pause-20220512005140-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 01:00:55.216527    9720 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0512 01:00:55.303174    9720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0512 01:00:55.327156    9720 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0512 01:00:55.327156    9720 kubeadm.go:601] restartCluster start
	I0512 01:00:55.339160    9720 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0512 01:00:55.358153    9720 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:00:55.365155    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:53.049183    6824 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-20220512005951-7184 --name cert-expiration-20220512005951-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-20220512005951-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-20220512005951-7184 --network cert-expiration-20220512005951-7184 --ip 192.168.58.2 --volume cert-expiration-20220512005951-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: (3.5493032s)
	I0512 01:00:53.060165    6824 cli_runner.go:164] Run: docker container inspect cert-expiration-20220512005951-7184 --format={{.State.Running}}
	I0512 01:00:54.316963    6824 cli_runner.go:217] Completed: docker container inspect cert-expiration-20220512005951-7184 --format={{.State.Running}}: (1.2567331s)
	I0512 01:00:54.328336    6824 cli_runner.go:164] Run: docker container inspect cert-expiration-20220512005951-7184 --format={{.State.Status}}
	I0512 01:00:55.530314    6824 cli_runner.go:217] Completed: docker container inspect cert-expiration-20220512005951-7184 --format={{.State.Status}}: (1.2019159s)
	I0512 01:00:55.538167    6824 cli_runner.go:164] Run: docker exec cert-expiration-20220512005951-7184 stat /var/lib/dpkg/alternatives/iptables
	I0512 01:00:56.500255    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.1350414s)
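The docker container inspect -f calls above use a Go template to recover the host port that Docker published for the container's 8443/tcp endpoint (the extra single quotes in the logged command are Windows shell quoting). A minimal editorial sketch of the same lookup, shelling out to the docker CLI rather than using minikube's cli_runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template the log uses: first host port mapped to 8443/tcp.
	format := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format,
		"pause-20220512005140-7184").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("host port:", strings.TrimSpace(string(out))) // e.g. 49878
}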
	I0512 01:00:56.501247    9720 kubeconfig.go:92] found "pause-20220512005140-7184" server: "https://127.0.0.1:49878"
	I0512 01:00:56.502267    9720 kapi.go:59] client config for pause-20220512005140-7184: &rest.Config{Host:"https://127.0.0.1:49878", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\pause-20220512005140-7184\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\pause-20220512005140-7184\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1315600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0512 01:00:56.513256    9720 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0512 01:00:56.536270    9720 api_server.go:165] Checking apiserver status ...
	I0512 01:00:56.548267    9720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:00:56.586256    9720 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1806/cgroup
	I0512 01:00:56.610260    9720 api_server.go:181] apiserver freezer: "20:freezer:/docker/18e2eed271b9010aeba677455559d5fa350c421d241ea6643e75bf1b1295b98a/kubepods/burstable/pod5dbc247a18a40cde52945b4c8d27dc67/34d00adfe03d17f24e700df04cfc476471de50e1834344088f04a8b6e8af0bc9"
	I0512 01:00:56.621258    9720 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/18e2eed271b9010aeba677455559d5fa350c421d241ea6643e75bf1b1295b98a/kubepods/burstable/pod5dbc247a18a40cde52945b4c8d27dc67/34d00adfe03d17f24e700df04cfc476471de50e1834344088f04a8b6e8af0bc9/freezer.state
	I0512 01:00:56.642249    9720 api_server.go:203] freezer state: "THAWED"
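The three commands above show how api_server.go decides whether the apiserver is paused or running: it resolves the process's freezer cgroup from /proc/<pid>/cgroup and reads that cgroup's freezer.state. A minimal sketch of the same probe, assuming cgroup v1 as in the kicbase image (the pid 1806 is the one pgrep found above):

package main

import (
	"fmt"
	"os"
	"strings"
)

// freezerState resolves the freezer cgroup for pid and reads its state
// ("THAWED" or "FROZEN"). Editorial sketch, cgroup v1 only.
func freezerState(pid string) (string, error) {
	data, err := os.ReadFile("/proc/" + pid + "/cgroup")
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(data), "\n") {
		parts := strings.SplitN(line, ":", 3) // hierarchy-ID:controller:path
		if len(parts) == 3 && parts[1] == "freezer" {
			state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
			if err != nil {
				return "", err
			}
			return strings.TrimSpace(string(state)), nil
		}
	}
	return "", fmt.Errorf("no freezer cgroup for pid %s", pid)
}

func main() {
	state, err := freezerState("1806")
	fmt.Println(state, err) // expect: THAWED <nil>
}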
	I0512 01:00:56.642249    9720 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:49878/healthz ...
	I0512 01:00:56.659291    9720 api_server.go:266] https://127.0.0.1:49878/healthz returned 200:
	ok
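The healthz probe that follows is a plain HTTPS GET against the forwarded apiserver port. minikube verifies the connection against the profile's CA certificate; the editorial sketch below skips verification for brevity:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Probe the apiserver health endpoint on the forwarded host port.
	// InsecureSkipVerify is an assumption for this sketch only.
	c := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := c.Get("https://127.0.0.1:49878/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}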
	I0512 01:00:56.686257    9720 system_pods.go:86] 6 kube-system pods found
	I0512 01:00:56.687259    9720 system_pods.go:89] "coredns-64897985d-6rqbl" [7d6e3981-4ff9-4593-83b1-57b703abd918] Running
	I0512 01:00:56.687259    9720 system_pods.go:89] "etcd-pause-20220512005140-7184" [62c0faef-19ea-4696-97ab-48e84baedea3] Running
	I0512 01:00:56.687259    9720 system_pods.go:89] "kube-apiserver-pause-20220512005140-7184" [83c3db73-94bd-4f33-83e9-6c42f62f4d4b] Running
	I0512 01:00:56.687259    9720 system_pods.go:89] "kube-controller-manager-pause-20220512005140-7184" [054f4a92-3568-4023-a22b-617612d6b1fb] Running
	I0512 01:00:56.687259    9720 system_pods.go:89] "kube-proxy-sk8qh" [f28d65ac-6d94-41fd-ad5c-dfc02902ee82] Running
	I0512 01:00:56.687259    9720 system_pods.go:89] "kube-scheduler-pause-20220512005140-7184" [ffdf2485-8fe5-44b1-b98c-7e4e039bcac0] Running
	I0512 01:00:56.690268    9720 api_server.go:140] control plane version: v1.23.5
	I0512 01:00:56.690268    9720 kubeadm.go:595] The running cluster does not require reconfiguration: 127.0.0.1
	I0512 01:00:56.690268    9720 kubeadm.go:649] Taking a shortcut, as the cluster seems to be properly configured
	I0512 01:00:56.690268    9720 kubeadm.go:605] restartCluster took 1.3630421s
	I0512 01:00:56.690268    9720 kubeadm.go:393] StartCluster complete in 1.4816705s
	I0512 01:00:56.690268    9720 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:00:56.690268    9720 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0512 01:00:56.692280    9720 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:00:56.707354    9720 kapi.go:59] client config for pause-20220512005140-7184: &rest.Config{Host:"https://127.0.0.1:49878", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\pause-20220512005140-7184\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\pause-20220512005140-7184\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1315600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0512 01:00:56.716136    9720 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20220512005140-7184" rescaled to 1
	I0512 01:00:56.716136    9720 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 01:00:56.719132    9720 out.go:177] * Verifying Kubernetes components...
	I0512 01:00:56.716136    9720 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0512 01:00:56.716136    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0512 01:00:56.716136    9720 config.go:178] Loaded profile config "pause-20220512005140-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:00:56.720106    9720 addons.go:65] Setting storage-provisioner=true in profile "pause-20220512005140-7184"
	I0512 01:00:56.720106    9720 addons.go:65] Setting default-storageclass=true in profile "pause-20220512005140-7184"
	I0512 01:00:56.722118    9720 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20220512005140-7184"
	I0512 01:00:56.720106    9720 addons.go:153] Setting addon storage-provisioner=true in "pause-20220512005140-7184"
	W0512 01:00:56.722118    9720 addons.go:165] addon storage-provisioner should already be in state true
	I0512 01:00:56.722118    9720 host.go:66] Checking if "pause-20220512005140-7184" exists ...
	I0512 01:00:56.733108    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 01:00:56.738108    9720 cli_runner.go:164] Run: docker container inspect pause-20220512005140-7184 --format={{.State.Status}}
	I0512 01:00:56.742116    9720 cli_runner.go:164] Run: docker container inspect pause-20220512005140-7184 --format={{.State.Status}}
	I0512 01:00:56.924712    9720 start.go:795] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0512 01:00:56.933696    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:57.924508    9720 cli_runner.go:217] Completed: docker container inspect pause-20220512005140-7184 --format={{.State.Status}}: (1.1823308s)
	I0512 01:00:57.940033    9720 cli_runner.go:217] Completed: docker container inspect pause-20220512005140-7184 --format={{.State.Status}}: (1.2018626s)
	I0512 01:00:58.081757    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.147887s)
	I0512 01:00:58.088589    9720 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0512 01:00:58.089442    9720 node_ready.go:35] waiting up to 6m0s for node "pause-20220512005140-7184" to be "Ready" ...
	I0512 01:00:56.516249    8484 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.1759055s)
	I0512 01:00:56.525279    8484 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname docker-flags-20220512005959-7184 --name docker-flags-20220512005959-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20220512005959-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=docker-flags-20220512005959-7184 --network docker-flags-20220512005959-7184 --ip 192.168.49.2 --volume docker-flags-20220512005959-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
	I0512 01:00:58.090251    9720 kapi.go:59] client config for pause-20220512005140-7184: &rest.Config{Host:"https://127.0.0.1:49878", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\pause-20220512005140-7184\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\pause-20220512005140-7184\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1315600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0512 01:00:58.108338    9720 node_ready.go:49] node "pause-20220512005140-7184" has status "Ready":"True"
	I0512 01:00:58.274591    9720 node_ready.go:38] duration metric: took 185.1387ms waiting for node "pause-20220512005140-7184" to be "Ready" ...
	I0512 01:00:58.274591    9720 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 01:00:58.274591    9720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0512 01:00:58.274591    9720 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0512 01:00:58.295857    9720 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-6rqbl" in "kube-system" namespace to be "Ready" ...
	I0512 01:00:58.297359    9720 addons.go:153] Setting addon default-storageclass=true in "pause-20220512005140-7184"
	W0512 01:00:58.297391    9720 addons.go:165] addon default-storageclass should already be in state true
	I0512 01:00:58.297391    9720 host.go:66] Checking if "pause-20220512005140-7184" exists ...
	I0512 01:00:58.300649    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:58.321627    9720 pod_ready.go:92] pod "coredns-64897985d-6rqbl" in "kube-system" namespace has status "Ready":"True"
	I0512 01:00:58.321627    9720 pod_ready.go:81] duration metric: took 25.6769ms waiting for pod "coredns-64897985d-6rqbl" in "kube-system" namespace to be "Ready" ...
	I0512 01:00:58.321627    9720 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20220512005140-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:00:58.324600    9720 cli_runner.go:164] Run: docker container inspect pause-20220512005140-7184 --format={{.State.Status}}
	I0512 01:00:58.345074    9720 pod_ready.go:92] pod "etcd-pause-20220512005140-7184" in "kube-system" namespace has status "Ready":"True"
	I0512 01:00:58.345074    9720 pod_ready.go:81] duration metric: took 23.4465ms waiting for pod "etcd-pause-20220512005140-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:00:58.345074    9720 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220512005140-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:00:58.363580    9720 pod_ready.go:92] pod "kube-apiserver-pause-20220512005140-7184" in "kube-system" namespace has status "Ready":"True"
	I0512 01:00:58.363580    9720 pod_ready.go:81] duration metric: took 18.5046ms waiting for pod "kube-apiserver-pause-20220512005140-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:00:58.363580    9720 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220512005140-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:00:58.378601    9720 pod_ready.go:92] pod "kube-controller-manager-pause-20220512005140-7184" in "kube-system" namespace has status "Ready":"True"
	I0512 01:00:58.378601    9720 pod_ready.go:81] duration metric: took 15.0198ms waiting for pod "kube-controller-manager-pause-20220512005140-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:00:58.378601    9720 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sk8qh" in "kube-system" namespace to be "Ready" ...
	I0512 01:00:58.503272    9720 pod_ready.go:92] pod "kube-proxy-sk8qh" in "kube-system" namespace has status "Ready":"True"
	I0512 01:00:58.503272    9720 pod_ready.go:81] duration metric: took 124.6649ms waiting for pod "kube-proxy-sk8qh" in "kube-system" namespace to be "Ready" ...
	I0512 01:00:58.503272    9720 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220512005140-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:00:58.911679    9720 pod_ready.go:92] pod "kube-scheduler-pause-20220512005140-7184" in "kube-system" namespace has status "Ready":"True"
	I0512 01:00:58.911679    9720 pod_ready.go:81] duration metric: took 408.3859ms waiting for pod "kube-scheduler-pause-20220512005140-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:00:58.911679    9720 pod_ready.go:38] duration metric: took 637.0554ms for extra waiting for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
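Each pod_ready.go wait above is a poll of the pod's Ready condition through the Kubernetes API. A minimal client-go sketch of the same loop (editorial, not minikube's source; pod name and namespace are taken from the log, and the kubeconfig path is an assumption):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll until the pod reports Ready=True, up to the 6m0s budget seen in the log.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-64897985d-6rqbl", metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors and keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	fmt.Println("ready:", err == nil)
}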
	I0512 01:00:58.912228    9720 api_server.go:51] waiting for apiserver process to appear ...
	I0512 01:00:58.926149    9720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:00:58.954274    9720 api_server.go:71] duration metric: took 2.2380222s to wait for apiserver process to appear ...
	I0512 01:00:58.954274    9720 api_server.go:87] waiting for apiserver healthz status ...
	I0512 01:00:58.954274    9720 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:49878/healthz ...
	I0512 01:00:58.974978    9720 api_server.go:266] https://127.0.0.1:49878/healthz returned 200:
	ok
	I0512 01:00:58.983737    9720 api_server.go:140] control plane version: v1.23.5
	I0512 01:00:58.983737    9720 api_server.go:130] duration metric: took 29.4614ms to wait for apiserver health ...
	I0512 01:00:58.983737    9720 system_pods.go:43] waiting for kube-system pods to appear ...
	I0512 01:00:59.117869    9720 system_pods.go:59] 6 kube-system pods found
	I0512 01:00:59.117869    9720 system_pods.go:61] "coredns-64897985d-6rqbl" [7d6e3981-4ff9-4593-83b1-57b703abd918] Running
	I0512 01:00:59.117869    9720 system_pods.go:61] "etcd-pause-20220512005140-7184" [62c0faef-19ea-4696-97ab-48e84baedea3] Running
	I0512 01:00:59.117869    9720 system_pods.go:61] "kube-apiserver-pause-20220512005140-7184" [83c3db73-94bd-4f33-83e9-6c42f62f4d4b] Running
	I0512 01:00:59.117869    9720 system_pods.go:61] "kube-controller-manager-pause-20220512005140-7184" [054f4a92-3568-4023-a22b-617612d6b1fb] Running
	I0512 01:00:59.117869    9720 system_pods.go:61] "kube-proxy-sk8qh" [f28d65ac-6d94-41fd-ad5c-dfc02902ee82] Running
	I0512 01:00:59.117869    9720 system_pods.go:61] "kube-scheduler-pause-20220512005140-7184" [ffdf2485-8fe5-44b1-b98c-7e4e039bcac0] Running
	I0512 01:00:59.117869    9720 system_pods.go:74] duration metric: took 134.1256ms to wait for pod list to return data ...
	I0512 01:00:59.117869    9720 default_sa.go:34] waiting for default service account to be created ...
	I0512 01:00:59.306002    9720 default_sa.go:45] found service account: "default"
	I0512 01:00:59.306002    9720 default_sa.go:55] duration metric: took 188.1237ms for default service account to be created ...
	I0512 01:00:59.306002    9720 system_pods.go:116] waiting for k8s-apps to be running ...
	I0512 01:00:59.450976    9720 cli_runner.go:217] Completed: docker container inspect pause-20220512005140-7184 --format={{.State.Status}}: (1.1263183s)
	I0512 01:00:59.450976    9720 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0512 01:00:59.450976    9720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0512 01:00:59.461594    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:59.467106    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.1663965s)
	I0512 01:00:59.467106    9720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49879 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\pause-20220512005140-7184\id_rsa Username:docker}
	I0512 01:00:59.608881    9720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 01:01:00.194012    9720 system_pods.go:86] 6 kube-system pods found
	I0512 01:01:00.194012    9720 system_pods.go:89] "coredns-64897985d-6rqbl" [7d6e3981-4ff9-4593-83b1-57b703abd918] Running
	I0512 01:01:00.194012    9720 system_pods.go:89] "etcd-pause-20220512005140-7184" [62c0faef-19ea-4696-97ab-48e84baedea3] Running
	I0512 01:01:00.194012    9720 system_pods.go:89] "kube-apiserver-pause-20220512005140-7184" [83c3db73-94bd-4f33-83e9-6c42f62f4d4b] Running
	I0512 01:01:00.194012    9720 system_pods.go:89] "kube-controller-manager-pause-20220512005140-7184" [054f4a92-3568-4023-a22b-617612d6b1fb] Running
	I0512 01:01:00.194012    9720 system_pods.go:89] "kube-proxy-sk8qh" [f28d65ac-6d94-41fd-ad5c-dfc02902ee82] Running
	I0512 01:01:00.194012    9720 system_pods.go:89] "kube-scheduler-pause-20220512005140-7184" [ffdf2485-8fe5-44b1-b98c-7e4e039bcac0] Running
	I0512 01:01:00.194012    9720 system_pods.go:126] duration metric: took 887.9638ms to wait for k8s-apps to be running ...
	I0512 01:01:00.194012    9720 system_svc.go:44] waiting for kubelet service to be running ....
	I0512 01:01:00.212032    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 01:01:00.625126    9720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.016193s)
	I0512 01:01:00.625126    9720 system_svc.go:56] duration metric: took 431.092ms WaitForService to wait for kubelet.
	I0512 01:01:00.626104    9720 kubeadm.go:548] duration metric: took 3.9097665s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0512 01:01:00.626104    9720 node_conditions.go:102] verifying NodePressure condition ...
	I0512 01:01:00.639108    9720 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0512 01:01:00.639108    9720 node_conditions.go:123] node cpu capacity is 16
	I0512 01:01:00.639108    9720 node_conditions.go:105] duration metric: took 13.0031ms to run NodePressure ...
	I0512 01:01:00.639108    9720 start.go:213] waiting for startup goroutines ...
	I0512 01:01:00.669107    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.2073842s)
	I0512 01:01:00.669107    9720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49879 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\pause-20220512005140-7184\id_rsa Username:docker}
	I0512 01:01:00.967034    9720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0512 01:00:56.848727    6824 cli_runner.go:217] Completed: docker exec cert-expiration-20220512005951-7184 stat /var/lib/dpkg/alternatives/iptables: (1.3104927s)
	I0512 01:00:56.848727    6824 oci.go:247] the created container "cert-expiration-20220512005951-7184" has a running status.
	I0512 01:00:56.848727    6824 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-expiration-20220512005951-7184\id_rsa...
	I0512 01:00:57.182119    6824 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-expiration-20220512005951-7184\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0512 01:00:58.500455    6824 cli_runner.go:164] Run: docker container inspect cert-expiration-20220512005951-7184 --format={{.State.Status}}
	I0512 01:00:59.625010    6824 cli_runner.go:217] Completed: docker container inspect cert-expiration-20220512005951-7184 --format={{.State.Status}}: (1.1244965s)
	I0512 01:00:59.649506    6824 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0512 01:00:59.649506    6824 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-20220512005951-7184 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0512 01:01:01.128559    6824 kic_runner.go:123] Done: [docker exec --privileged cert-expiration-20220512005951-7184 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.4789767s)
	I0512 01:01:01.131534    6824 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-expiration-20220512005951-7184\id_rsa...
	I0512 01:01:01.927594    9720 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0512 01:01:01.931600    9720 addons.go:417] enableAddons completed in 5.2151954s
	I0512 01:01:02.207223    9720 start.go:499] kubectl: 1.18.2, cluster: 1.23.5 (minor skew: 5)
	I0512 01:01:02.210231    9720 out.go:177] 
	W0512 01:01:02.213242    9720 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.23.5.
	I0512 01:01:02.224237    9720 out.go:177]   - Want kubectl v1.23.5? Try 'minikube kubectl -- get pods -A'
	I0512 01:01:02.232235    9720 out.go:177] * Done! kubectl is now configured to use "pause-20220512005140-7184" cluster and "default" namespace by default
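The "minor skew: 5" figure above compares only the minor components of the kubectl and cluster versions (1.18 vs 1.23); minikube's actual check lives in start.go. A sketch of that comparison (editorial; assumes well-formed "MAJOR.MINOR[.PATCH]" strings):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute gap between the minor components of two
// version strings, e.g. minorSkew("1.18.2", "1.23.5") == 5.
func minorSkew(a, b string) int {
	minor := func(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		m, _ := strconv.Atoi(parts[1])
		return m
	}
	d := minor(a) - minor(b)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	fmt.Println(minorSkew("1.18.2", "1.23.5")) // 5
}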
	I0512 01:01:01.873595    8484 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname docker-flags-20220512005959-7184 --name docker-flags-20220512005959-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20220512005959-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=docker-flags-20220512005959-7184 --network docker-flags-20220512005959-7184 --ip 192.168.49.2 --volume docker-flags-20220512005959-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: (5.3480401s)
	I0512 01:01:01.882597    8484 cli_runner.go:164] Run: docker container inspect docker-flags-20220512005959-7184 --format={{.State.Running}}
	I0512 01:01:03.195519    8484 cli_runner.go:217] Completed: docker container inspect docker-flags-20220512005959-7184 --format={{.State.Running}}: (1.3128541s)
	I0512 01:01:03.202509    8484 cli_runner.go:164] Run: docker container inspect docker-flags-20220512005959-7184 --format={{.State.Status}}
	I0512 01:01:04.524320    8484 cli_runner.go:217] Completed: docker container inspect docker-flags-20220512005959-7184 --format={{.State.Status}}: (1.3217435s)
	I0512 01:01:04.532316    8484 cli_runner.go:164] Run: docker exec docker-flags-20220512005959-7184 stat /var/lib/dpkg/alternatives/iptables
	I0512 01:01:01.700503    6824 cli_runner.go:164] Run: docker container inspect cert-expiration-20220512005951-7184 --format={{.State.Status}}
	I0512 01:01:03.004335    6824 cli_runner.go:217] Completed: docker container inspect cert-expiration-20220512005951-7184 --format={{.State.Status}}: (1.3037647s)
	I0512 01:01:03.004335    6824 machine.go:88] provisioning docker machine ...
	I0512 01:01:03.004335    6824 ubuntu.go:169] provisioning hostname "cert-expiration-20220512005951-7184"
	I0512 01:01:03.012202    6824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184
	I0512 01:01:04.319916    6824 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184: (1.3076471s)
	I0512 01:01:04.324901    6824 main.go:134] libmachine: Using SSH client type: native
	I0512 01:01:04.333895    6824 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50111 <nil> <nil>}
	I0512 01:01:04.333895    6824 main.go:134] libmachine: About to run SSH command:
	sudo hostname cert-expiration-20220512005951-7184 && echo "cert-expiration-20220512005951-7184" | sudo tee /etc/hostname
	I0512 01:01:04.551470    6824 main.go:134] libmachine: SSH cmd err, output: <nil>: cert-expiration-20220512005951-7184
	
	I0512 01:01:04.564303    6824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184
	I0512 01:01:05.697486    6824 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184: (1.1331246s)
	I0512 01:01:05.701487    6824 main.go:134] libmachine: Using SSH client type: native
	I0512 01:01:05.701487    6824 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50111 <nil> <nil>}
	I0512 01:01:05.701487    6824 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-20220512005951-7184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-20220512005951-7184/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-20220512005951-7184' | sudo tee -a /etc/hosts; 
				fi
			fi
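The guarded script above follows the Debian/Ubuntu convention of mapping the machine's hostname to 127.0.1.1: it first checks whether /etc/hosts already names the host, rewrites an existing 127.0.1.1 entry in place with sed, and only appends a new line otherwise, so repeated provisioning runs stay idempotent.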
	I0512 01:01:05.935294    6824 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0512 01:01:05.935294    6824 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0512 01:01:05.935294    6824 ubuntu.go:177] setting up certificates
	I0512 01:01:05.935294    6824 provision.go:83] configureAuth start
	I0512 01:01:05.944290    6824 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-20220512005951-7184
	I0512 01:01:05.806016    8484 cli_runner.go:217] Completed: docker exec docker-flags-20220512005959-7184 stat /var/lib/dpkg/alternatives/iptables: (1.2735155s)
	I0512 01:01:05.806060    8484 oci.go:247] the created container "docker-flags-20220512005959-7184" has a running status.
	I0512 01:01:05.806301    8484 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\docker-flags-20220512005959-7184\id_rsa...
	I0512 01:01:06.013793    8484 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\docker-flags-20220512005959-7184\id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0512 01:01:06.020780    8484 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\docker-flags-20220512005959-7184\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0512 01:01:07.225364    8484 cli_runner.go:164] Run: docker container inspect docker-flags-20220512005959-7184 --format={{.State.Status}}
	I0512 01:01:08.395109    8484 cli_runner.go:217] Completed: docker container inspect docker-flags-20220512005959-7184 --format={{.State.Status}}: (1.1695839s)
	I0512 01:01:08.412760    8484 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0512 01:01:08.412760    8484 kic_runner.go:114] Args: [docker exec --privileged docker-flags-20220512005959-7184 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0512 01:01:09.752607    8484 kic_runner.go:123] Done: [docker exec --privileged docker-flags-20220512005959-7184 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.3397782s)
	I0512 01:01:09.756575    8484 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\docker-flags-20220512005959-7184\id_rsa...
	I0512 01:01:07.104406    6824 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-20220512005951-7184: (1.1600568s)
	I0512 01:01:07.104406    6824 provision.go:138] copyHostCerts
	I0512 01:01:07.104406    6824 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0512 01:01:07.104406    6824 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0512 01:01:07.104406    6824 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0512 01:01:07.105409    6824 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0512 01:01:07.105409    6824 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0512 01:01:07.106410    6824 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0512 01:01:07.107416    6824 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0512 01:01:07.107416    6824 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0512 01:01:07.107416    6824 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0512 01:01:07.108420    6824 provision.go:112] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.cert-expiration-20220512005951-7184 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube cert-expiration-20220512005951-7184]
	I0512 01:01:07.456711    6824 provision.go:172] copyRemoteCerts
	I0512 01:01:07.467759    6824 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0512 01:01:07.477988    6824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184
	I0512 01:01:08.632777    6824 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184: (1.1547292s)
	I0512 01:01:08.633499    6824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50111 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-expiration-20220512005951-7184\id_rsa Username:docker}
	I0512 01:01:08.774373    6824 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.3065461s)
	I0512 01:01:08.775371    6824 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0512 01:01:08.825445    6824 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0512 01:01:08.871683    6824 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1277 bytes)
	I0512 01:01:08.947899    6824 provision.go:86] duration metric: configureAuth took 3.012393s
	I0512 01:01:08.947899    6824 ubuntu.go:193] setting minikube options for container-runtime
	I0512 01:01:08.947899    6824 config.go:178] Loaded profile config "cert-expiration-20220512005951-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:01:08.958932    6824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184
	I0512 01:01:10.152616    6824 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184: (1.1936227s)
	I0512 01:01:10.156609    6824 main.go:134] libmachine: Using SSH client type: native
	I0512 01:01:10.156609    6824 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50111 <nil> <nil>}
	I0512 01:01:10.156609    6824 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0512 01:01:10.347539    6824 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0512 01:01:10.347539    6824 ubuntu.go:71] root file system type: overlay
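The df probe above reports the root filesystem type inside the kicbase container (overlay), which minikube records before rewriting the Docker unit, presumably to confirm the rootfs is compatible with the overlay2 storage driver seen later in the daemon logs.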
	I0512 01:01:10.347539    6824 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0512 01:01:10.356883    6824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184
	I0512 01:01:11.498388    6824 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184: (1.1413833s)
	I0512 01:01:11.504495    6824 main.go:134] libmachine: Using SSH client type: native
	I0512 01:01:11.505234    6824 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50111 <nil> <nil>}
	I0512 01:01:11.505234    6824 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0512 01:01:10.382579    8484 cli_runner.go:164] Run: docker container inspect docker-flags-20220512005959-7184 --format={{.State.Status}}
	I0512 01:01:11.498514    8484 cli_runner.go:217] Completed: docker container inspect docker-flags-20220512005959-7184 --format={{.State.Status}}: (1.1157515s)
	I0512 01:01:11.498514    8484 machine.go:88] provisioning docker machine ...
	I0512 01:01:11.498514    8484 ubuntu.go:169] provisioning hostname "docker-flags-20220512005959-7184"
	I0512 01:01:11.511399    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184
	I0512 01:01:12.614413    8484 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184: (1.1029192s)
	I0512 01:01:12.618411    8484 main.go:134] libmachine: Using SSH client type: native
	I0512 01:01:12.618411    8484 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50123 <nil> <nil>}
	I0512 01:01:12.618411    8484 main.go:134] libmachine: About to run SSH command:
	sudo hostname docker-flags-20220512005959-7184 && echo "docker-flags-20220512005959-7184" | sudo tee /etc/hostname
	I0512 01:01:12.845911    8484 main.go:134] libmachine: SSH cmd err, output: <nil>: docker-flags-20220512005959-7184
	
	I0512 01:01:12.853912    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184
	I0512 01:01:14.044358    8484 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184: (1.1902352s)
	I0512 01:01:14.048988    8484 main.go:134] libmachine: Using SSH client type: native
	I0512 01:01:14.051240    8484 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50123 <nil> <nil>}
	I0512 01:01:14.051240    8484 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdocker-flags-20220512005959-7184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 docker-flags-20220512005959-7184/g' /etc/hosts;
				else 
					echo '127.0.1.1 docker-flags-20220512005959-7184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0512 01:01:14.249319    8484 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0512 01:01:14.249319    8484 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0512 01:01:14.249319    8484 ubuntu.go:177] setting up certificates
	I0512 01:01:14.249319    8484 provision.go:83] configureAuth start
	I0512 01:01:14.259922    8484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" docker-flags-20220512005959-7184
	I0512 01:01:11.736682    6824 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0512 01:01:11.744715    6824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184
	I0512 01:01:12.897920    6824 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184: (1.1531454s)
	I0512 01:01:12.903919    6824 main.go:134] libmachine: Using SSH client type: native
	I0512 01:01:12.903919    6824 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50111 <nil> <nil>}
	I0512 01:01:12.903919    6824 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
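The diff-or-replace one-liner above is an idempotent update idiom: diff -u exits non-zero only when the freshly generated docker.service.new differs from the installed unit, so the mv, daemon-reload, enable, and restart steps run only when the configuration actually changed.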
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-05-12 00:58:27 UTC, end at Thu 2022-05-12 01:01:24 UTC. --
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.600079000Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.600122800Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.600144700Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.603242100Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.603380800Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.603422900Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.603450500Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.829452600Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.859866300Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.859968900Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.859991300Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.859999400Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.860010200Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.860017400Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.860268100Z" level=info msg="Loading containers: start."
	May 12 00:58:47 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:47.064954200Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 12 00:58:47 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:47.168245600Z" level=info msg="Loading containers: done."
	May 12 00:58:47 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:47.287750800Z" level=info msg="Docker daemon" commit=4433bf6 graphdriver(s)=overlay2 version=20.10.15
	May 12 00:58:47 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:47.287901600Z" level=info msg="Daemon has completed initialization"
	May 12 00:58:47 pause-20220512005140-7184 systemd[1]: Started Docker Application Container Engine.
	May 12 00:58:47 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:47.353224900Z" level=info msg="API listen on [::]:2376"
	May 12 00:58:47 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:47.359564200Z" level=info msg="API listen on /var/run/docker.sock"
	May 12 00:59:43 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:59:43.626822600Z" level=info msg="ignoring event" container=18a71909db628e98999d5e631afcee12b0535efc00d4ad63d9e6d8d03f0fca72 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:00:22 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T01:00:22.852574500Z" level=info msg="ignoring event" container=93c48e3561a563768d9850597bce373e6d471c669bab3559fc5f6127eb8cbead module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:00:24 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T01:00:24.806323600Z" level=info msg="ignoring event" container=d2c19d84bf254bb6896aaf87a79d1237501c216f6232e2c8f4fda3cd9ce82963 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	e6a72fa21d5f9       6e38f40d628db       22 seconds ago       Running             storage-provisioner       0                   7f2018d24dcb9
	fb0603d9b195e       a4ca41631cc7a       About a minute ago   Running             coredns                   0                   ef7464c48a751
	b075ccd54d6d7       3c53fa8541f95       About a minute ago   Running             kube-proxy                0                   fab7a31437673
	52389506cb8f7       b0c9e5e4dbb14       About a minute ago   Running             kube-controller-manager   1                   668b95fd5cf75
	2fc40eb3e688c       884d49d6d8c9f       About a minute ago   Running             kube-scheduler            0                   0a97c038a0ef4
	a5606a57f2a0c       25f8c7f3da61c       About a minute ago   Running             etcd                      0                   8fb3af689928d
	18a71909db628       b0c9e5e4dbb14       About a minute ago   Exited              kube-controller-manager   0                   668b95fd5cf75
	34d00adfe03d1       3fc1d62d65872       About a minute ago   Running             kube-apiserver            0                   fc0e6231e879c
	
	* 
	* ==> coredns [fb0603d9b195] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20220512005140-7184
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20220512005140-7184
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50a7977b568d2ad3e04003527a57f4502d6177a0
	                    minikube.k8s.io/name=pause-20220512005140-7184
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_12T00_59_46_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 May 2022 00:59:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20220512005140-7184
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 May 2022 01:01:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 May 2022 01:00:09 +0000   Thu, 12 May 2022 00:59:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 May 2022 01:00:09 +0000   Thu, 12 May 2022 00:59:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 May 2022 01:00:09 +0000   Thu, 12 May 2022 00:59:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 May 2022 01:00:09 +0000   Thu, 12 May 2022 00:59:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    pause-20220512005140-7184
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	System Info:
	  Machine ID:                 8556a0a9a0e64ba4b825f672d2dce0b9
	  System UUID:                8556a0a9a0e64ba4b825f672d2dce0b9
	  Boot ID:                    10186544-b659-4889-afdb-c2512535b797
	  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.15
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-6rqbl                              100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     75s
	  kube-system                 etcd-pause-20220512005140-7184                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         106s
	  kube-system                 kube-apiserver-pause-20220512005140-7184             250m (1%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-pause-20220512005140-7184    200m (1%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-proxy-sk8qh                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-scheduler-pause-20220512005140-7184             100m (0%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 70s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  2m2s (x8 over 2m3s)  kubelet     Node pause-20220512005140-7184 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m3s)  kubelet     Node pause-20220512005140-7184 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s (x7 over 2m3s)  kubelet     Node pause-20220512005140-7184 status is now: NodeHasSufficientPID
	  Normal  Starting                 98s                  kubelet     Starting kubelet.
	  Normal  NodeHasNoDiskPressure    97s                  kubelet     Node pause-20220512005140-7184 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s                  kubelet     Node pause-20220512005140-7184 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             97s                  kubelet     Node pause-20220512005140-7184 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  97s                  kubelet     Node pause-20220512005140-7184 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  96s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                86s                  kubelet     Node pause-20220512005140-7184 status is now: NodeReady
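
The node reports Ready with no memory, disk, or PID pressure, so the failure under investigation is not a node-health problem. As a quick check (quoting shown for a POSIX shell), the Ready condition alone can be extracted with:

	kubectl --context pause-20220512005140-7184 get node pause-20220512005140-7184 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'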
	
	* 
	* ==> dmesg <==
	* [May12 00:41] WSL2: Performing memory compaction.
	[May12 00:42] WSL2: Performing memory compaction.
	[May12 00:43] WSL2: Performing memory compaction.
	[May12 00:44] WSL2: Performing memory compaction.
	[May12 00:45] WSL2: Performing memory compaction.
	[May12 00:46] WSL2: Performing memory compaction.
	[May12 00:47] WSL2: Performing memory compaction.
	[May12 00:48] WSL2: Performing memory compaction.
	[May12 00:49] process 'docker/tmp/qemu-check071081722/check' started with executable stack
	[ +21.082981] WSL2: Performing memory compaction.
	[May12 00:51] WSL2: Performing memory compaction.
	[May12 00:52] WSL2: Performing memory compaction.
	[May12 00:54] WSL2: Performing memory compaction.
	[May12 00:56] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.010948] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.036593] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000001] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000001] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[May12 00:57] WSL2: Performing memory compaction.
	[May12 00:58] WSL2: Performing memory compaction.
	[May12 01:00] WSL2: Performing memory compaction.
	[May12 01:01] WSL2: Performing memory compaction.
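
The dmesg buffer is dominated by routine WSL2 memory-compaction messages; the overlayfs upperdir/workdir warnings at 00:56 come from overlapping overlay mounts inside the kicbase container and are noisy but not fatal. The same buffer can be read directly from the node, for example:

	minikube -p pause-20220512005140-7184 ssh "dmesg | tail -n 20"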
	
	* 
	* ==> etcd [a5606a57f2a0] <==
	* {"level":"warn","ts":"2022-05-12T01:01:15.872Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"931.0164ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-05-12T01:01:15.872Z","caller":"traceutil/trace.go:171","msg":"trace[1207236342] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:521; }","duration":"931.0624ms","start":"2022-05-12T01:01:14.941Z","end":"2022-05-12T01:01:15.872Z","steps":["trace[1207236342] 'range keys from in-memory index tree'  (duration: 930.8781ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-12T01:01:15.872Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:01:14.310Z","time spent":"1.5618874s","remote":"127.0.0.1:54086","response type":"/etcdserverpb.KV/Range","request count":0,"request size":30,"response count":1,"response size":367,"request content":"key:\"/registry/namespaces/default\" "}
	{"level":"warn","ts":"2022-05-12T01:01:15.872Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:01:14.941Z","time spent":"931.1066ms","remote":"127.0.0.1:54112","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2022-05-12T01:01:15.872Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"140.5704ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2022-05-12T01:01:15.872Z","caller":"traceutil/trace.go:171","msg":"trace[360830156] range","detail":"{range_begin:/registry/csinodes/; range_end:/registry/csinodes0; response_count:0; response_revision:521; }","duration":"141.0913ms","start":"2022-05-12T01:01:15.731Z","end":"2022-05-12T01:01:15.872Z","steps":["trace[360830156] 'count revisions from in-memory index tree'  (duration: 140.4751ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-12T01:01:15.872Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"775.2498ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2022-05-12T01:01:15.872Z","caller":"traceutil/trace.go:171","msg":"trace[501765133] range","detail":"{range_begin:/registry/storageclasses/; range_end:/registry/storageclasses0; response_count:0; response_revision:521; }","duration":"775.9528ms","start":"2022-05-12T01:01:15.096Z","end":"2022-05-12T01:01:15.872Z","steps":["trace[501765133] 'count revisions from in-memory index tree'  (duration: 775.137ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-12T01:01:15.873Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:01:15.096Z","time spent":"776.0933ms","remote":"127.0.0.1:54180","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":31,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true "}
	{"level":"warn","ts":"2022-05-12T01:01:16.909Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"627.7362ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2289940453759133128 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.67.2\" mod_revision:515 > success:<request_put:<key:\"/registry/masterleases/192.168.67.2\" value_size:67 lease:2289940453759133126 >> failure:<request_range:<key:\"/registry/masterleases/192.168.67.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-05-12T01:01:16.909Z","caller":"traceutil/trace.go:171","msg":"trace[1973444395] linearizableReadLoop","detail":"{readStateIndex:554; appliedIndex:553; }","duration":"970.0203ms","start":"2022-05-12T01:01:15.939Z","end":"2022-05-12T01:01:16.909Z","steps":["trace[1973444395] 'read index received'  (duration: 342.0045ms)","trace[1973444395] 'applied index is now lower than readState.Index'  (duration: 628.0119ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-12T01:01:16.910Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"970.2145ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-05-12T01:01:16.910Z","caller":"traceutil/trace.go:171","msg":"trace[134297754] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:522; }","duration":"970.3321ms","start":"2022-05-12T01:01:15.939Z","end":"2022-05-12T01:01:16.910Z","steps":["trace[134297754] 'agreement among raft nodes before linearized reading'  (duration: 970.1351ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-12T01:01:16.910Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:01:15.939Z","time spent":"970.3845ms","remote":"127.0.0.1:54112","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2022-05-12T01:01:16.910Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"957.0695ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1127"}
	{"level":"info","ts":"2022-05-12T01:01:16.910Z","caller":"traceutil/trace.go:171","msg":"trace[489068306] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:522; }","duration":"957.1391ms","start":"2022-05-12T01:01:15.953Z","end":"2022-05-12T01:01:16.910Z","steps":["trace[489068306] 'agreement among raft nodes before linearized reading'  (duration: 957.06ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-12T01:01:16.910Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:01:15.953Z","time spent":"957.2418ms","remote":"127.0.0.1:54088","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1151,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2022-05-12T01:01:16.910Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"902.4111ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2022-05-12T01:01:16.910Z","caller":"traceutil/trace.go:171","msg":"trace[728535268] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:0; response_revision:522; }","duration":"902.4589ms","start":"2022-05-12T01:01:16.007Z","end":"2022-05-12T01:01:16.910Z","steps":["trace[728535268] 'agreement among raft nodes before linearized reading'  (duration: 902.3709ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-12T01:01:16.910Z","caller":"traceutil/trace.go:171","msg":"trace[1105670514] transaction","detail":"{read_only:false; response_revision:522; number_of_response:1; }","duration":"1.0286107s","start":"2022-05-12T01:01:15.881Z","end":"2022-05-12T01:01:16.910Z","steps":["trace[1105670514] 'process raft request'  (duration: 400.4977ms)","trace[1105670514] 'compare'  (duration: 626.8637ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-12T01:01:16.910Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:01:15.881Z","time spent":"1.029035s","remote":"127.0.0.1:54064","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.67.2\" mod_revision:515 > success:<request_put:<key:\"/registry/masterleases/192.168.67.2\" value_size:67 lease:2289940453759133126 >> failure:<request_range:<key:\"/registry/masterleases/192.168.67.2\" > >"}
	{"level":"warn","ts":"2022-05-12T01:01:16.910Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:01:16.007Z","time spent":"902.504ms","remote":"127.0.0.1:54216","response type":"/etcdserverpb.KV/Range","request count":0,"request size":96,"response count":29,"response size":31,"request content":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true "}
	{"level":"info","ts":"2022-05-12T01:01:17.126Z","caller":"traceutil/trace.go:171","msg":"trace[39428066] linearizableReadLoop","detail":"{readStateIndex:555; appliedIndex:555; }","duration":"182.2554ms","start":"2022-05-12T01:01:16.944Z","end":"2022-05-12T01:01:17.126Z","steps":["trace[39428066] 'read index received'  (duration: 182.2416ms)","trace[39428066] 'applied index is now lower than readState.Index'  (duration: 10.8µs)"],"step_count":2}
	{"level":"warn","ts":"2022-05-12T01:01:17.187Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"243.3007ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-05-12T01:01:17.187Z","caller":"traceutil/trace.go:171","msg":"trace[357678386] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:523; }","duration":"243.5336ms","start":"2022-05-12T01:01:16.944Z","end":"2022-05-12T01:01:17.187Z","steps":["trace[357678386] 'agreement among raft nodes before linearized reading'  (duration: 182.4481ms)","trace[357678386] 'range keys from in-memory index tree'  (duration: 60.8249ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  01:01:25 up  2:09,  0 users,  load average: 4.65, 4.71, 3.40
	Linux pause-20220512005140-7184 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [34d00adfe03d] <==
	* I0512 01:00:07.929566       1 trace.go:205] Trace[1096747061]: "Get" url:/api/v1/namespaces/default/serviceaccounts/default,user-agent:kubectl/v1.23.5 (linux/amd64) kubernetes/c285e78,audit-id:792b9dc1-3ed2-41ce-ad1f-142fdf130a04,client:127.0.0.1,accept:application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json,protocol:HTTP/2.0 (12-May-2022 01:00:02.055) (total time: 5873ms):
	Trace[1096747061]: [5.8735741s] [5.8735741s] END
	I0512 01:00:07.929157       1 trace.go:205] Trace[115992878]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts,user-agent:kube-controller-manager/v1.23.5 (linux/amd64) kubernetes/c285e78/kube-controller-manager,audit-id:a8df53b7-c4b1-4a35-9a57-fce024d6e22c,client:192.168.67.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (12-May-2022 01:00:01.822) (total time: 6106ms):
	Trace[115992878]: ---"Object stored in database" 6105ms (01:00:07.928)
	Trace[115992878]: [6.1062849s] [6.1062849s] END
	I0512 01:00:09.812754       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0512 01:00:09.902316       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0512 01:00:14.726740       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0512 01:00:44.986351       1 trace.go:205] Trace[1192640284]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (12-May-2022 01:00:44.314) (total time: 672ms):
	Trace[1192640284]: ---"Transaction committed" 668ms (01:00:44.986)
	Trace[1192640284]: [672.1393ms] [672.1393ms] END
	I0512 01:01:00.176257       1 trace.go:205] Trace[1230668400]: "List etcd3" key:/pods/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: (12-May-2022 01:00:59.494) (total time: 681ms):
	Trace[1230668400]: [681.9148ms] [681.9148ms] END
	I0512 01:01:00.178306       1 trace.go:205] Trace[1401788407]: "List" url:/api/v1/namespaces/kube-system/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:82410fa3-3d8f-4f14-a39e-1eddc9eac436,client:192.168.67.1,accept:application/json, */*,protocol:HTTP/2.0 (12-May-2022 01:00:59.494) (total time: 683ms):
	Trace[1401788407]: ---"Listing from storage done" 682ms (01:01:00.176)
	Trace[1401788407]: [683.9758ms] [683.9758ms] END
	I0512 01:01:15.873525       1 trace.go:205] Trace[973272629]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.23.5 (linux/amd64) kubernetes/c285e78,audit-id:d1153530-27d6-412a-8213-836eba1e2be8,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (12-May-2022 01:01:14.309) (total time: 1564ms):
	Trace[973272629]: ---"About to write a response" 1564ms (01:01:15.873)
	Trace[973272629]: [1.5641768s] [1.5641768s] END
	I0512 01:01:16.912222       1 trace.go:205] Trace[648684563]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:f22c37cc-e9c0-43fb-94aa-8269f7b17ea7,client:192.168.67.2,accept:application/json, */*,protocol:HTTP/2.0 (12-May-2022 01:01:15.951) (total time: 960ms):
	Trace[648684563]: ---"About to write a response" 960ms (01:01:16.911)
	Trace[648684563]: [960.2776ms] [960.2776ms] END
	I0512 01:01:16.912963       1 trace.go:205] Trace[711113022]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (12-May-2022 01:01:15.877) (total time: 1035ms):
	Trace[711113022]: ---"Transaction committed" 1032ms (01:01:16.912)
	Trace[711113022]: [1.0356261s] [1.0356261s] END
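
The apiserver traces mirror the etcd slowness: the multi-second Get and Create calls spend almost all of their time in "Object stored in database" or "Transaction committed", i.e. waiting on etcd. Overall control-plane readiness, including the apiserver's own etcd check, can be queried with:

	kubectl --context pause-20220512005140-7184 get --raw='/readyz?verbose'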
	
	* 
	* ==> kube-controller-manager [18a71909db62] <==
	* 	/usr/local/go/src/bytes/buffer.go:204 +0x98
	crypto/tls.(*Conn).readFromUntil(0xc0009c6e00, {0x4d4fe80, 0xc000128018}, 0x8ef)
		/usr/local/go/src/crypto/tls/conn.go:799 +0xe5
	crypto/tls.(*Conn).readRecordOrCCS(0xc0009c6e00, 0x0)
		/usr/local/go/src/crypto/tls/conn.go:606 +0x112
	crypto/tls.(*Conn).readRecord(...)
		/usr/local/go/src/crypto/tls/conn.go:574
	crypto/tls.(*Conn).Read(0xc0009c6e00, {0xc00128b000, 0x1000, 0x919560})
		/usr/local/go/src/crypto/tls/conn.go:1277 +0x16f
	bufio.(*Reader).Read(0xc0003c9440, {0xc00128c040, 0x9, 0x934bc2})
		/usr/local/go/src/bufio/bufio.go:227 +0x1b4
	io.ReadAtLeast({0x4d47860, 0xc0003c9440}, {0xc00128c040, 0x9, 0x9}, 0x9)
		/usr/local/go/src/io/io.go:328 +0x9a
	io.ReadFull(...)
		/usr/local/go/src/io/io.go:347
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader({0xc00128c040, 0x9, 0xc00102b110}, {0x4d47860, 0xc0003c9440})
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x6e
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc00128c000)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:498 +0x95
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00063ff98)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:2101 +0x130
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc001288000)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1997 +0x6f
	created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:725 +0xac5
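
The [18a71909db62] block appears to be a goroutine stack from the previous kube-controller-manager container, dumped while it sat blocked in an HTTP/2 read; the live instance is the [52389506cb8f] block that follows. If the exited container still exists on the node, its final output can be pulled with, for example:

	minikube -p pause-20220512005140-7184 ssh -- docker logs --tail 50 18a71909db62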
	
	* 
	* ==> kube-controller-manager [52389506cb8f] <==
	* I0512 01:00:09.503610       1 disruption.go:371] Sending events to api server.
	I0512 01:00:09.514563       1 shared_informer.go:247] Caches are synced for job 
	I0512 01:00:09.522374       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0512 01:00:09.602674       1 shared_informer.go:247] Caches are synced for resource quota 
	I0512 01:00:09.602675       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0512 01:00:09.602675       1 shared_informer.go:247] Caches are synced for resource quota 
	I0512 01:00:09.602769       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0512 01:00:09.603302       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0512 01:00:09.603484       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0512 01:00:09.603553       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0512 01:00:09.603689       1 shared_informer.go:247] Caches are synced for HPA 
	I0512 01:00:09.603757       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0512 01:00:09.603904       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0512 01:00:09.610406       1 shared_informer.go:247] Caches are synced for cronjob 
	I0512 01:00:09.705079       1 range_allocator.go:374] Set node pause-20220512005140-7184 PodCIDR to [10.244.0.0/24]
	I0512 01:00:09.923057       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0512 01:00:09.923243       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0512 01:00:09.928867       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0512 01:00:09.928966       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0512 01:00:10.120209       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sk8qh"
	I0512 01:00:10.306992       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-6rqbl"
	I0512 01:00:10.424887       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-jt5dx"
	I0512 01:00:10.809631       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	E0512 01:00:10.903903       1 replica_set.go:536] sync "kube-system/coredns-64897985d" failed with Operation cannot be fulfilled on replicasets.apps "coredns-64897985d": the object has been modified; please apply your changes to the latest version and try again
	I0512 01:00:10.914019       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-jt5dx"
	
	* 
	* ==> kube-proxy [b075ccd54d6d] <==
	* E0512 01:00:13.919024       1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I0512 01:00:13.923387       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0512 01:00:14.005854       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0512 01:00:14.009902       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0512 01:00:14.014553       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0512 01:00:14.017702       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0512 01:00:14.208358       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0512 01:00:14.208510       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0512 01:00:14.208590       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0512 01:00:14.716981       1 server_others.go:206] "Using iptables Proxier"
	I0512 01:00:14.717126       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0512 01:00:14.717147       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0512 01:00:14.717188       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0512 01:00:14.718482       1 server.go:656] "Version info" version="v1.23.5"
	I0512 01:00:14.719990       1 config.go:226] "Starting endpoint slice config controller"
	I0512 01:00:14.720321       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0512 01:00:14.721226       1 config.go:317] "Starting service config controller"
	I0512 01:00:14.721253       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0512 01:00:14.821982       1 shared_informer.go:247] Caches are synced for service config 
	I0512 01:00:14.902572       1 shared_informer.go:247] Caches are synced for endpoint slice config 
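
The modprobe failures are expected on WSL2, whose custom kernel ships no modules.builtin under /lib/modules, and kube-proxy itself says they can be ignored inside a container; it then falls back cleanly to the iptables proxier. The effective mode can be confirmed from the pod logs in any kubeadm-style cluster (POSIX shell shown):

	kubectl --context pause-20220512005140-7184 -n kube-system logs -l k8s-app=kube-proxy | grep Proxier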
	
	* 
	* ==> kube-scheduler [2fc40eb3e688] <==
	* E0512 00:59:40.581820       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0512 00:59:40.796618       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0512 00:59:40.796748       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0512 00:59:40.826087       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0512 00:59:40.826224       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0512 00:59:41.108608       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0512 00:59:41.108718       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0512 00:59:41.174238       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0512 00:59:41.174354       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0512 00:59:41.325504       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0512 00:59:41.325656       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0512 00:59:41.608211       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0512 00:59:41.608381       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0512 00:59:42.342260       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0512 00:59:42.342418       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0512 00:59:42.439027       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0512 00:59:42.439185       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0512 00:59:42.609349       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0512 00:59:42.609510       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0512 00:59:52.520252       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0512 00:59:52.520441       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0512 00:59:52.520602       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0512 00:59:52.520733       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0512 00:59:52.669964       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0512 00:59:53.417197       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
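
The forbidden list/watch errors all predate 00:59:53 and stop once the client-ca informer syncs on the last line: during kubeadm bootstrap the scheduler starts before its RBAC objects exist, so these are startup noise rather than a persistent permission problem. The binding it depends on can be verified with:

	kubectl --context pause-20220512005140-7184 get clusterrolebinding system:kube-scheduler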
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-05-12 00:58:27 UTC, end at Thu 2022-05-12 01:01:25 UTC. --
	May 12 01:00:10 pause-20220512005140-7184 kubelet[2193]: I0512 01:00:10.708417    2193 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7pdx\" (UniqueName: \"kubernetes.io/projected/7d6e3981-4ff9-4593-83b1-57b703abd918-kube-api-access-g7pdx\") pod \"coredns-64897985d-6rqbl\" (UID: \"7d6e3981-4ff9-4593-83b1-57b703abd918\") " pod="kube-system/coredns-64897985d-6rqbl"
	May 12 01:00:10 pause-20220512005140-7184 kubelet[2193]: I0512 01:00:10.708745    2193 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d6e3981-4ff9-4593-83b1-57b703abd918-config-volume\") pod \"coredns-64897985d-6rqbl\" (UID: \"7d6e3981-4ff9-4593-83b1-57b703abd918\") " pod="kube-system/coredns-64897985d-6rqbl"
	May 12 01:00:10 pause-20220512005140-7184 kubelet[2193]: I0512 01:00:10.708811    2193 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b705adaf-83b8-4176-93de-8ba6de4e179c-config-volume\") pod \"coredns-64897985d-jt5dx\" (UID: \"b705adaf-83b8-4176-93de-8ba6de4e179c\") " pod="kube-system/coredns-64897985d-jt5dx"
	May 12 01:00:10 pause-20220512005140-7184 kubelet[2193]: I0512 01:00:10.708862    2193 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz5cc\" (UniqueName: \"kubernetes.io/projected/b705adaf-83b8-4176-93de-8ba6de4e179c-kube-api-access-xz5cc\") pod \"coredns-64897985d-jt5dx\" (UID: \"b705adaf-83b8-4176-93de-8ba6de4e179c\") " pod="kube-system/coredns-64897985d-jt5dx"
	May 12 01:00:11 pause-20220512005140-7184 kubelet[2193]: I0512 01:00:11.135263    2193 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="fab7a3143767358f8d7a387cd21544605514770ffb9ca5b47ed3ac673503e170"
	May 12 01:00:14 pause-20220512005140-7184 kubelet[2193]: I0512 01:00:14.304828    2193 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="ef7464c48a75136dee847722beaa7efa7f944daa601c7c98e19e56d4ebfc3f6c"
	May 12 01:00:14 pause-20220512005140-7184 kubelet[2193]: I0512 01:00:14.305239    2193 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-6rqbl through plugin: invalid network status for"
	May 12 01:00:14 pause-20220512005140-7184 kubelet[2193]: I0512 01:00:14.805187    2193 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="d2c19d84bf254bb6896aaf87a79d1237501c216f6232e2c8f4fda3cd9ce82963"
	May 12 01:00:14 pause-20220512005140-7184 kubelet[2193]: I0512 01:00:14.807736    2193 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-jt5dx through plugin: invalid network status for"
	May 12 01:00:15 pause-20220512005140-7184 kubelet[2193]: I0512 01:00:15.909086    2193 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-6rqbl through plugin: invalid network status for"
	May 12 01:00:16 pause-20220512005140-7184 kubelet[2193]: I0512 01:00:16.113088    2193 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-jt5dx through plugin: invalid network status for"
	May 12 01:00:17 pause-20220512005140-7184 kubelet[2193]: I0512 01:00:17.620473    2193 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-jt5dx through plugin: invalid network status for"
	May 12 01:00:26 pause-20220512005140-7184 kubelet[2193]: I0512 01:00:26.203230    2193 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz5cc\" (UniqueName: \"kubernetes.io/projected/b705adaf-83b8-4176-93de-8ba6de4e179c-kube-api-access-xz5cc\") pod \"b705adaf-83b8-4176-93de-8ba6de4e179c\" (UID: \"b705adaf-83b8-4176-93de-8ba6de4e179c\") "
	May 12 01:00:26 pause-20220512005140-7184 kubelet[2193]: I0512 01:00:26.203566    2193 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b705adaf-83b8-4176-93de-8ba6de4e179c-config-volume\") pod \"b705adaf-83b8-4176-93de-8ba6de4e179c\" (UID: \"b705adaf-83b8-4176-93de-8ba6de4e179c\") "
	May 12 01:00:26 pause-20220512005140-7184 kubelet[2193]: W0512 01:00:26.204081    2193 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/b705adaf-83b8-4176-93de-8ba6de4e179c/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	May 12 01:00:26 pause-20220512005140-7184 kubelet[2193]: I0512 01:00:26.204913    2193 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b705adaf-83b8-4176-93de-8ba6de4e179c-config-volume" (OuterVolumeSpecName: "config-volume") pod "b705adaf-83b8-4176-93de-8ba6de4e179c" (UID: "b705adaf-83b8-4176-93de-8ba6de4e179c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	May 12 01:00:26 pause-20220512005140-7184 kubelet[2193]: I0512 01:00:26.211627    2193 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b705adaf-83b8-4176-93de-8ba6de4e179c-kube-api-access-xz5cc" (OuterVolumeSpecName: "kube-api-access-xz5cc") pod "b705adaf-83b8-4176-93de-8ba6de4e179c" (UID: "b705adaf-83b8-4176-93de-8ba6de4e179c"). InnerVolumeSpecName "kube-api-access-xz5cc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 12 01:00:26 pause-20220512005140-7184 kubelet[2193]: I0512 01:00:26.304645    2193 reconciler.go:300] "Volume detached for volume \"kube-api-access-xz5cc\" (UniqueName: \"kubernetes.io/projected/b705adaf-83b8-4176-93de-8ba6de4e179c-kube-api-access-xz5cc\") on node \"pause-20220512005140-7184\" DevicePath \"\""
	May 12 01:00:26 pause-20220512005140-7184 kubelet[2193]: I0512 01:00:26.304817    2193 reconciler.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b705adaf-83b8-4176-93de-8ba6de4e179c-config-volume\") on node \"pause-20220512005140-7184\" DevicePath \"\""
	May 12 01:00:26 pause-20220512005140-7184 kubelet[2193]: I0512 01:00:26.780988    2193 scope.go:110] "RemoveContainer" containerID="93c48e3561a563768d9850597bce373e6d471c669bab3559fc5f6127eb8cbead"
	May 12 01:00:28 pause-20220512005140-7184 kubelet[2193]: I0512 01:00:28.506333    2193 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=b705adaf-83b8-4176-93de-8ba6de4e179c path="/var/lib/kubelet/pods/b705adaf-83b8-4176-93de-8ba6de4e179c/volumes"
	May 12 01:01:00 pause-20220512005140-7184 kubelet[2193]: I0512 01:01:00.629833    2193 topology_manager.go:200] "Topology Admit Handler"
	May 12 01:01:00 pause-20220512005140-7184 kubelet[2193]: I0512 01:01:00.707270    2193 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv68m\" (UniqueName: \"kubernetes.io/projected/12a7c8dc-d760-4250-8e03-260022384d31-kube-api-access-xv68m\") pod \"storage-provisioner\" (UID: \"12a7c8dc-d760-4250-8e03-260022384d31\") " pod="kube-system/storage-provisioner"
	May 12 01:01:00 pause-20220512005140-7184 kubelet[2193]: I0512 01:01:00.707452    2193 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/12a7c8dc-d760-4250-8e03-260022384d31-tmp\") pod \"storage-provisioner\" (UID: \"12a7c8dc-d760-4250-8e03-260022384d31\") " pod="kube-system/storage-provisioner"
	May 12 01:01:02 pause-20220512005140-7184 kubelet[2193]: E0512 01:01:02.125794    2193 kuberuntime_manager.go:1065] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unknown desc = Error: No such container: 7f2018d24dcb9031421b74f1f5e2a4d5aa8fc01e982ab32110afe8cbf6235c17" podSandboxID="7f2018d24dcb9031421b74f1f5e2a4d5aa8fc01e982ab32110afe8cbf6235c17" pod="kube-system/storage-provisioner"
	
	* 
	* ==> storage-provisioner [e6a72fa21d5f] <==
	* I0512 01:01:03.687115       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0512 01:01:03.732723       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0512 01:01:03.732922       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0512 01:01:03.776074       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0512 01:01:03.776531       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20220512005140-7184_79c0bd3a-7d46-426c-bc29-8ccb891e3a6f!
	I0512 01:01:03.776536       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"78896ed9-a2ba-43cf-b67f-5cc8ac1c18c0", APIVersion:"v1", ResourceVersion:"512", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20220512005140-7184_79c0bd3a-7d46-426c-bc29-8ccb891e3a6f became leader
	I0512 01:01:03.877681       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20220512005140-7184_79c0bd3a-7d46-426c-bc29-8ccb891e3a6f!
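
storage-provisioner comes up cleanly and wins the leader lease it logs above. That lease is the kube-system/k8s.io-minikube-hostpath Endpoints object named in the event; the current holder is recorded in its control-plane.alpha.kubernetes.io/leader annotation, visible via:

	kubectl --context pause-20220512005140-7184 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml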
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-20220512005140-7184 -n pause-20220512005140-7184
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-20220512005140-7184 -n pause-20220512005140-7184: (6.8951965s)
helpers_test.go:261: (dbg) Run:  kubectl --context pause-20220512005140-7184 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestPause/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context pause-20220512005140-7184 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-20220512005140-7184 describe pod : exit status 1 (273.9986ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context pause-20220512005140-7184 describe pod : exit status 1
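
The exit-1 above is a quirk of the post-mortem helper rather than a cluster problem: the non-running pod list printed at helpers_test.go:270 was empty, so the helper ran "kubectl describe pod" with no names, which kubectl rejects. A minimal Go sketch of a guard that would avoid this; the function name and signature are illustrative, not the helper's real API:

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// describeNonRunning is a hypothetical guard: skip kubectl entirely when
	// there are no pod names, since `kubectl describe pod` with no arguments
	// exits 1 with "resource name may not be empty".
	func describeNonRunning(profile string, pods []string) error {
		if len(pods) == 0 {
			fmt.Println("no non-running pods to describe")
			return nil
		}
		args := append([]string{"--context", profile, "describe", "pod"}, pods...)
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Print(string(out))
		return err
	}
	
	func main() {
		_ = describeNonRunning("pause-20220512005140-7184", nil)
	}
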
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20220512005140-7184
helpers_test.go:231: (dbg) Done: docker inspect pause-20220512005140-7184: (1.0800605s)
helpers_test.go:235: (dbg) docker inspect pause-20220512005140-7184:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "18e2eed271b9010aeba677455559d5fa350c421d241ea6643e75bf1b1295b98a",
	        "Created": "2022-05-12T00:58:24.8559004Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 173863,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-12T00:58:26.7948067Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8a42e1145657f551cd435eddb43b96ab44d0facbe44106da934225366eeb7757",
	        "ResolvConfPath": "/var/lib/docker/containers/18e2eed271b9010aeba677455559d5fa350c421d241ea6643e75bf1b1295b98a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/18e2eed271b9010aeba677455559d5fa350c421d241ea6643e75bf1b1295b98a/hostname",
	        "HostsPath": "/var/lib/docker/containers/18e2eed271b9010aeba677455559d5fa350c421d241ea6643e75bf1b1295b98a/hosts",
	        "LogPath": "/var/lib/docker/containers/18e2eed271b9010aeba677455559d5fa350c421d241ea6643e75bf1b1295b98a/18e2eed271b9010aeba677455559d5fa350c421d241ea6643e75bf1b1295b98a-json.log",
	        "Name": "/pause-20220512005140-7184",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20220512005140-7184:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20220512005140-7184",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e8d5531512b7a911f6cfe7100e18d8756ac20aba06f91fca74ca5931c044b75b-init/diff:/var/lib/docker/overlay2/f5e9ce82d74c36e0b6de7ac5d28dc4eacb2abae094b3550db66974795ad74446/diff:/var/lib/docker/overlay2/63fa2edc88f635760dd847938696c8fc50aad8a0c51ab6c6f93b0aa9a6fcefe6/diff:/var/lib/docker/overlay2/5fcaace21e215fd120a775470b4a78ef06d9f466e024f0b1baef234ddf87f04f/diff:/var/lib/docker/overlay2/6c9accb62919ca088333f8b3a193f43886e2c3042c5ec726e1c7fd19ee183024/diff:/var/lib/docker/overlay2/a9a1aea640018bd9e524c36422ea97cc97a391601b718810fed779260e48c87a/diff:/var/lib/docker/overlay2/2476b6e8d656e43d76d4288098236bc765cb37fa9dde696f09bfce485e99229e/diff:/var/lib/docker/overlay2/6cdf08ddc61561e961de67f3e14478397868c53a4d8552909a5b84e35b28db1f/diff:/var/lib/docker/overlay2/6f6f8b6686cc7838a52ce30fdc4e938cde2fb68b161e09a9bc81fa11011619a6/diff:/var/lib/docker/overlay2/9f55b91f762ea60dc3da71cf56e5ca24181e712389d99cb8d972bba474f5d6a4/diff:/var/lib/docker/overlay2/4c269b
df57eae0d21d2034dc086308278f9b00f2301c726b6de876b9ff97298d/diff:/var/lib/docker/overlay2/5fb8ed9b9e765df8150f27b7847ec7333b2c7d978dbc1161da97d630ec7e43e2/diff:/var/lib/docker/overlay2/3a297e9f6ab51d930ef61c49a0fea772cdc2a2704a077db6adb142eb044d9a93/diff:/var/lib/docker/overlay2/2068464f4655627fb513b31660ab34c938e559da10d44fd723ce9e1d744a037d/diff:/var/lib/docker/overlay2/f783596106daebadefbb7774015c369d757665d434c96581b426b2e5f5b453c4/diff:/var/lib/docker/overlay2/ac8927d3cc7829cc82e4c0214dd4adee97aedbe2b7d992cbbc08288443c8e320/diff:/var/lib/docker/overlay2/91eff4612dd56b2c82f4357b879f9cdcbc13704bf8f6dcbfc56febb104774843/diff:/var/lib/docker/overlay2/ee7366acc162efb7b878c4c56df021a8ef756fa595230ffe898cd0dd0355eb44/diff:/var/lib/docker/overlay2/ab5df115d2ec8cd71172942a2e449de32b93a3b6b5d90122c0e734c5e11d6bb2/diff:/var/lib/docker/overlay2/59fcbe9b552129cdc5c96e9e8ff27f4b88a12645aec3cf8f48f28d91521760e0/diff:/var/lib/docker/overlay2/0684681eb5880654b43e8803ef8f17b85e6129c85ff81c13b509563184a77625/diff:/var/lib/d
ocker/overlay2/8bbd24801b480df6ca8545e8f8bb09c17b7598c2868fb94ea5b8775ce2f311b4/diff:/var/lib/docker/overlay2/f28553cc59fccccfdfc5c24b7b8dfe4055c625d0a004731911c34b4ba32a9dfb/diff:/var/lib/docker/overlay2/2e47a8ef6e4481885d71f57a1d9ef99898b741644addd2796de5c2f4c696edb0/diff:/var/lib/docker/overlay2/e1f1eaaa809c974dabd197f590d19be05325f506a53a9a1f8ba29defd7096f60/diff:/var/lib/docker/overlay2/83ce12af60df76f98283ed8f3450cd5727b42d06055b18f04a18068b105ae128/diff:/var/lib/docker/overlay2/5fd34820f54e7f8f0c898c21b5d9d030e5b82c65c901897306c3db475481167f/diff:/var/lib/docker/overlay2/3ecb6f46fa47a8906ff5de1da5a63be9c664ff5bc66faf870126868d36bb77c6/diff:/var/lib/docker/overlay2/ccb92f12dd3e84b11b2c9b1ef6a0581ad5894648432ebe7cb5d16d48c7aacf6e/diff:/var/lib/docker/overlay2/7c6d11dc9abdd4916f3759c8ae4db8c3011cff872f2fd3cc502e7f663e496765/diff:/var/lib/docker/overlay2/b865b0351704115fa113e25f7651d1dc1e2f0348c332552e555e898094f34802/diff:/var/lib/docker/overlay2/bbcf207462c3f88368214d8e4ca222f28a828bd30661741d421665b4d10
80f07/diff:/var/lib/docker/overlay2/b554a32e9a2e4d3773e918754c27a1b32bc7ec5327d3bd1f52d7a146a07fa2c5/diff:/var/lib/docker/overlay2/d0a997bacfa9b1b54f61c62f00ad2797616ea9bb55182aad68ed805f96f5f72b/diff:/var/lib/docker/overlay2/e0c168ecfe6a93618f4f653c1aba422023114f242ab1045591d0c8454573d5c2/diff:/var/lib/docker/overlay2/fb67af38a46ef55935fcfb4f1be5f34b45b3d0e1c571538828117f23eedea417/diff:/var/lib/docker/overlay2/e96ed0776e5f27ef225469ac5f5e8ed2e299c72d5db88782599c0fdd1cec2fe3/diff:/var/lib/docker/overlay2/91b77e60e0a7864ace4f5a4d65f465bd7fe862616a87a74ee9fee21dc5dceb07/diff:/var/lib/docker/overlay2/9829211293f70b356dfa8d07b5dbbc3a6d05415cbd2840fd9dd948b8b315bf18/diff:/var/lib/docker/overlay2/dc35dda36e34a2f4f3a5d958b1a7d4d75db8655c4bc7b4b3d9591f43f9a645fc/diff:/var/lib/docker/overlay2/968c2bb04f641a9c8bd30d38659dc28973b31bfd577bb1aa355ae6c2ab4a0d34/diff:/var/lib/docker/overlay2/37432c6ae0b10a52e95b215fdd2e256362060f32c4a52f0d2021b6e10b3ed77b/diff:/var/lib/docker/overlay2/77687f9734b19f3e8a5bb08b07067e0b572775
20867b7a6ad80b67ffebe332d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e8d5531512b7a911f6cfe7100e18d8756ac20aba06f91fca74ca5931c044b75b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e8d5531512b7a911f6cfe7100e18d8756ac20aba06f91fca74ca5931c044b75b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e8d5531512b7a911f6cfe7100e18d8756ac20aba06f91fca74ca5931c044b75b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20220512005140-7184",
	                "Source": "/var/lib/docker/volumes/pause-20220512005140-7184/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20220512005140-7184",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20220512005140-7184",
	                "name.minikube.sigs.k8s.io": "pause-20220512005140-7184",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5fc86e33a51dd5e3f8a6f4418511d60ecf2eedb16bc3a9b28d55bc8d4edf64db",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49879"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49880"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49881"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49877"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49878"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5fc86e33a51d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20220512005140-7184": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "18e2eed271b9",
	                        "pause-20220512005140-7184"
	                    ],
	                    "NetworkID": "a9929553bfb020a9e4bf303619ae9b575309dee125013399d5cd8de3ba117e4b",
	                    "EndpointID": "663cda03432b757552b9a422b19dc422d6ee77f8fcff664417a7d7ae476fad45",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
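
Note: the inspect dump above shows how the kicbase container publishes its service ports (22, 2376, 5000, 8443 and 32443) on 127.0.0.1-bound host ports. As a minimal sketch of reading that mapping back out, assuming the JSON from "docker inspect pause-20220512005140-7184" is piped in on stdin (editor's illustration, not minikube code; the file name portfor.go is hypothetical):

    // portfor.go: print the 127.0.0.1 host port bound to 8443/tcp
    // (the Kubernetes API server) from `docker inspect <container>` JSON.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    func main() {
        // docker inspect emits a JSON array of container objects.
        var out []struct {
            NetworkSettings struct {
                Ports map[string][]struct{ HostIp, HostPort string }
            }
        }
        if err := json.NewDecoder(os.Stdin).Decode(&out); err != nil {
            fmt.Fprintln(os.Stderr, "decode:", err)
            os.Exit(1)
        }
        for _, c := range out {
            for _, b := range c.NetworkSettings.Ports["8443/tcp"] {
                // With the dump above this prints https://127.0.0.1:49878
                fmt.Printf("https://%s:%s\n", b.HostIp, b.HostPort)
            }
        }
    }

Usage: docker inspect pause-20220512005140-7184 | go run portfor.go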
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20220512005140-7184 -n pause-20220512005140-7184
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-20220512005140-7184 -n pause-20220512005140-7184: (7.1896819s)
helpers_test.go:244: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-20220512005140-7184 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-20220512005140-7184 logs -n 25: (8.2221207s)
helpers_test.go:252: TestPause/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|------------------------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                   Args                   |                 Profile                  |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------|------------------------------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p                                       | insufficient-storage-20220512004557-7184 | minikube4\jenkins | v1.25.2 | 12 May 22 00:47 GMT | 12 May 22 00:47 GMT |
	|         | insufficient-storage-20220512004557-7184 |                                          |                   |         |                     |                     |
	| start   | -p                                       | NoKubernetes-20220512004748-7184         | minikube4\jenkins | v1.25.2 | 12 May 22 00:47 GMT | 12 May 22 00:50 GMT |
	|         | NoKubernetes-20220512004748-7184         |                                          |                   |         |                     |                     |
	|         | --driver=docker                          |                                          |                   |         |                     |                     |
	| start   | -p                                       | force-systemd-flag-20220512004748-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:47 GMT | 12 May 22 00:50 GMT |
	|         | force-systemd-flag-20220512004748-7184   |                                          |                   |         |                     |                     |
	|         | --memory=2048 --force-systemd            |                                          |                   |         |                     |                     |
	|         | --alsologtostderr -v=5 --driver=docker   |                                          |                   |         |                     |                     |
	| ssh     | force-systemd-flag-20220512004748-7184   | force-systemd-flag-20220512004748-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:50 GMT | 12 May 22 00:51 GMT |
	|         | ssh docker info --format                 |                                          |                   |         |                     |                     |
	|         | {{.CgroupDriver}}                        |                                          |                   |         |                     |                     |
	| start   | -p                                       | offline-docker-20220512004748-7184       | minikube4\jenkins | v1.25.2 | 12 May 22 00:47 GMT | 12 May 22 00:51 GMT |
	|         | offline-docker-20220512004748-7184       |                                          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                   |                                          |                   |         |                     |                     |
	|         | --memory=2048 --wait=true                |                                          |                   |         |                     |                     |
	|         | --driver=docker                          |                                          |                   |         |                     |                     |
	| delete  | -p                                       | force-systemd-flag-20220512004748-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:51 GMT | 12 May 22 00:51 GMT |
	|         | force-systemd-flag-20220512004748-7184   |                                          |                   |         |                     |                     |
	| delete  | -p                                       | offline-docker-20220512004748-7184       | minikube4\jenkins | v1.25.2 | 12 May 22 00:51 GMT | 12 May 22 00:51 GMT |
	|         | offline-docker-20220512004748-7184       |                                          |                   |         |                     |                     |
	| start   | -p                                       | NoKubernetes-20220512004748-7184         | minikube4\jenkins | v1.25.2 | 12 May 22 00:51 GMT | 12 May 22 00:51 GMT |
	|         | NoKubernetes-20220512004748-7184         |                                          |                   |         |                     |                     |
	|         | --no-kubernetes --driver=docker          |                                          |                   |         |                     |                     |
	| delete  | -p                                       | NoKubernetes-20220512004748-7184         | minikube4\jenkins | v1.25.2 | 12 May 22 00:51 GMT | 12 May 22 00:52 GMT |
	|         | NoKubernetes-20220512004748-7184         |                                          |                   |         |                     |                     |
	| delete  | -p                                       | NoKubernetes-20220512004748-7184         | minikube4\jenkins | v1.25.2 | 12 May 22 00:52 GMT | 12 May 22 00:53 GMT |
	|         | NoKubernetes-20220512004748-7184         |                                          |                   |         |                     |                     |
	| start   | -p                                       | stopped-upgrade-20220512004748-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:53 GMT | 12 May 22 00:54 GMT |
	|         | stopped-upgrade-20220512004748-7184      |                                          |                   |         |                     |                     |
	|         | --memory=2200 --alsologtostderr          |                                          |                   |         |                     |                     |
	|         | -v=1 --driver=docker                     |                                          |                   |         |                     |                     |
	| logs    | -p                                       | stopped-upgrade-20220512004748-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:54 GMT | 12 May 22 00:54 GMT |
	|         | stopped-upgrade-20220512004748-7184      |                                          |                   |         |                     |                     |
	| delete  | -p                                       | stopped-upgrade-20220512004748-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:54 GMT | 12 May 22 00:55 GMT |
	|         | stopped-upgrade-20220512004748-7184      |                                          |                   |         |                     |                     |
	| start   | -p                                       | kubernetes-upgrade-20220512005507-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:55 GMT | 12 May 22 00:57 GMT |
	|         | kubernetes-upgrade-20220512005507-7184   |                                          |                   |         |                     |                     |
	|         | --memory=2200                            |                                          |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0             |                                          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |                   |         |                     |                     |
	| stop    | -p                                       | kubernetes-upgrade-20220512005507-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:57 GMT | 12 May 22 00:57 GMT |
	|         | kubernetes-upgrade-20220512005507-7184   |                                          |                   |         |                     |                     |
	| start   | -p                                       | kubernetes-upgrade-20220512005507-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:57 GMT | 12 May 22 00:59 GMT |
	|         | kubernetes-upgrade-20220512005507-7184   |                                          |                   |         |                     |                     |
	|         | --memory=2200                            |                                          |                   |         |                     |                     |
	|         | --kubernetes-version=v1.23.6-rc.0        |                                          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |                   |         |                     |                     |
	| logs    | running-upgrade-20220512005137-7184      | running-upgrade-20220512005137-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:58 GMT | 12 May 22 00:59 GMT |
	|         | logs -n 25                               |                                          |                   |         |                     |                     |
	| start   | -p                                       | missing-upgrade-20220512005316-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:56 GMT | 12 May 22 00:59 GMT |
	|         | missing-upgrade-20220512005316-7184      |                                          |                   |         |                     |                     |
	|         | --memory=2200 --alsologtostderr          |                                          |                   |         |                     |                     |
	|         | -v=1 --driver=docker                     |                                          |                   |         |                     |                     |
	| start   | -p                                       | kubernetes-upgrade-20220512005507-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:59 GMT | 12 May 22 00:59 GMT |
	|         | kubernetes-upgrade-20220512005507-7184   |                                          |                   |         |                     |                     |
	|         | --memory=2200                            |                                          |                   |         |                     |                     |
	|         | --kubernetes-version=v1.23.6-rc.0        |                                          |                   |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker   |                                          |                   |         |                     |                     |
	| delete  | -p                                       | missing-upgrade-20220512005316-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:59 GMT | 12 May 22 00:59 GMT |
	|         | missing-upgrade-20220512005316-7184      |                                          |                   |         |                     |                     |
	| delete  | -p                                       | running-upgrade-20220512005137-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 00:59 GMT | 12 May 22 00:59 GMT |
	|         | running-upgrade-20220512005137-7184      |                                          |                   |         |                     |                     |
	| delete  | -p                                       | kubernetes-upgrade-20220512005507-7184   | minikube4\jenkins | v1.25.2 | 12 May 22 00:59 GMT | 12 May 22 01:00 GMT |
	|         | kubernetes-upgrade-20220512005507-7184   |                                          |                   |         |                     |                     |
	| start   | -p pause-20220512005140-7184             | pause-20220512005140-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 00:51 GMT | 12 May 22 01:00 GMT |
	|         | --memory=2048                            |                                          |                   |         |                     |                     |
	|         | --install-addons=false                   |                                          |                   |         |                     |                     |
	|         | --wait=all --driver=docker               |                                          |                   |         |                     |                     |
	| start   | -p pause-20220512005140-7184             | pause-20220512005140-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:00 GMT | 12 May 22 01:01 GMT |
	|         | --alsologtostderr -v=1                   |                                          |                   |         |                     |                     |
	|         | --driver=docker                          |                                          |                   |         |                     |                     |
	| logs    | pause-20220512005140-7184 logs           | pause-20220512005140-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:01 GMT | 12 May 22 01:01 GMT |
	|         | -n 25                                    |                                          |                   |         |                     |                     |
	|---------|------------------------------------------|------------------------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/12 01:00:21
	Running on machine: minikube4
	Binary: Built with gc go1.18.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
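
Note: that header is the standard klog/glog prefix, so each entry below can be split mechanically into severity, date, time, thread id, source location and message. A small sketch (editor's illustration, not part of the test suite; it assumes the leading tab the report adds has been trimmed):

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogLine captures the documented header:
    // [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

    func main() {
        m := klogLine.FindStringSubmatch("I0512 01:00:21.213283    9720 out.go:296] Setting OutFile to fd 1688 ...")
        if m != nil {
            // severity=I date=0512 at=out.go:296 msg="Setting OutFile to fd 1688 ..."
            fmt.Printf("severity=%s date=%s at=%s:%s msg=%q\n", m[1], m[2], m[5], m[6], m[7])
        }
    }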
	I0512 01:00:21.213283    9720 out.go:296] Setting OutFile to fd 1688 ...
	I0512 01:00:21.282534    9720 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:00:21.282534    9720 out.go:309] Setting ErrFile to fd 1656...
	I0512 01:00:21.282534    9720 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:00:21.305200    9720 out.go:303] Setting JSON to false
	I0512 01:00:21.308301    9720 start.go:115] hostinfo: {"hostname":"minikube4","uptime":15674,"bootTime":1652301547,"procs":172,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0512 01:00:21.308301    9720 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0512 01:00:21.660884    9720 out.go:177] * [pause-20220512005140-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0512 01:00:21.672856    9720 notify.go:193] Checking for updates...
	I0512 01:00:21.675391    9720 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0512 01:00:21.684694    9720 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0512 01:00:21.690598    9720 out.go:177]   - MINIKUBE_LOCATION=13639
	I0512 01:00:21.696642    9720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0512 01:00:19.429679    2560 start.go:284] selected driver: docker
	I0512 01:00:19.430675    2560 start.go:801] validating driver "docker" against <nil>
	I0512 01:00:19.430735    2560 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0512 01:00:19.508422    2560 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:00:21.694798    2560 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1860777s)
	I0512 01:00:21.695172    2560 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:69 OomKillDisable:true NGoroutines:67 SystemTime:2022-05-12 01:00:20.6005639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:00:21.695733    2560 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0512 01:00:21.696642    2560 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0512 01:00:21.702107    2560 out.go:177] * Using Docker Desktop driver with the root privilege
	I0512 01:00:21.706636    2560 cni.go:95] Creating CNI manager for ""
	I0512 01:00:21.706636    2560 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 01:00:21.706692    2560 start_flags.go:306] config:
	{Name:cert-options-20220512010013-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:cert-options-20220512010013-7184 Namespace:default APIServerName:localhost APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 01:00:21.711205    2560 out.go:177] * Starting control plane node cert-options-20220512010013-7184 in cluster cert-options-20220512010013-7184
	I0512 01:00:21.714196    2560 cache.go:120] Beginning downloading kic base image for docker with docker
	I0512 01:00:21.716234    2560 out.go:177] * Pulling base image ...
	I0512 01:00:21.720203    2560 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:00:21.720203    2560 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0512 01:00:21.720203    2560 preload.go:148] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0512 01:00:21.720203    2560 cache.go:57] Caching tarball of preloaded images
	I0512 01:00:21.720203    2560 preload.go:174] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0512 01:00:21.721226    2560 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0512 01:00:21.721226    2560 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-options-20220512010013-7184\config.json ...
	I0512 01:00:21.721226    2560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-options-20220512010013-7184\config.json: {Name:mkf0687d73aad6be387e3af041b729cff9e41140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:00:22.782049    2560 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
	I0512 01:00:22.782049    2560 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in daemon, skipping load
	I0512 01:00:22.782102    2560 cache.go:206] Successfully downloaded all kic artifacts
	I0512 01:00:22.782102    2560 start.go:352] acquiring machines lock for cert-options-20220512010013-7184: {Name:mkc9630d6bf42b39fa8bbbbf1e40af095a872c10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 01:00:22.782102    2560 start.go:356] acquired machines lock for "cert-options-20220512010013-7184" in 0s
	I0512 01:00:22.782102    2560 start.go:91] Provisioning new machine with config: &{Name:cert-options-20220512010013-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:cert-options-20220512010013-7184 Namespace:default APIServerName:localhost APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8555 NodeName:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8555 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 01:00:22.782102    2560 start.go:131] createHost starting for "" (driver="docker")
	I0512 01:00:22.785508    2560 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0512 01:00:22.785508    2560 start.go:165] libmachine.API.Create for "cert-options-20220512010013-7184" (driver="docker")
	I0512 01:00:22.786038    2560 client.go:168] LocalClient.Create starting
	I0512 01:00:22.786304    2560 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0512 01:00:22.786304    2560 main.go:134] libmachine: Decoding PEM data...
	I0512 01:00:22.786304    2560 main.go:134] libmachine: Parsing certificate...
	I0512 01:00:22.786304    2560 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0512 01:00:22.786860    2560 main.go:134] libmachine: Decoding PEM data...
	I0512 01:00:22.786899    2560 main.go:134] libmachine: Parsing certificate...
	I0512 01:00:22.795153    2560 cli_runner.go:164] Run: docker network inspect cert-options-20220512010013-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 01:00:21.702773    9720 config.go:178] Loaded profile config "pause-20220512005140-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:00:21.703939    9720 driver.go:358] Setting default libvirt URI to qemu:///system
	I0512 01:00:25.084035    9720 docker.go:137] docker version: linux-20.10.14
	I0512 01:00:25.093157    9720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:00:27.451330    9720 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.357964s)
	I0512 01:00:27.452657    9720 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:70 OomKillDisable:true NGoroutines:71 SystemTime:2022-05-12 01:00:26.3237667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:00:27.456531    9720 out.go:177] * Using the docker driver based on existing profile
	W0512 01:00:23.931606    2560 cli_runner.go:211] docker network inspect cert-options-20220512010013-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 01:00:24.564597    2560 cli_runner.go:217] Completed: docker network inspect cert-options-20220512010013-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.136395s)
	I0512 01:00:24.576495    2560 network_create.go:272] running [docker network inspect cert-options-20220512010013-7184] to gather additional debugging logs...
	I0512 01:00:24.576495    2560 cli_runner.go:164] Run: docker network inspect cert-options-20220512010013-7184
	W0512 01:00:26.044362    2560 cli_runner.go:211] docker network inspect cert-options-20220512010013-7184 returned with exit code 1
	I0512 01:00:26.044362    2560 cli_runner.go:217] Completed: docker network inspect cert-options-20220512010013-7184: (1.4677914s)
	I0512 01:00:26.044362    2560 network_create.go:275] error running [docker network inspect cert-options-20220512010013-7184]: docker network inspect cert-options-20220512010013-7184: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cert-options-20220512010013-7184
	I0512 01:00:26.044362    2560 network_create.go:277] output of [docker network inspect cert-options-20220512010013-7184]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cert-options-20220512010013-7184
	
	** /stderr **
	I0512 01:00:26.054098    2560 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 01:00:27.218635    2560 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1644771s)
	I0512 01:00:27.243640    2560 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000120598] misses:0}
	I0512 01:00:27.243640    2560 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:00:27.243640    2560 network_create.go:115] attempt to create docker network cert-options-20220512010013-7184 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0512 01:00:27.250634    2560 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220512010013-7184
	W0512 01:00:28.458578    2560 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220512010013-7184 returned with exit code 1
	I0512 01:00:28.458578    2560 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220512010013-7184: (1.2077512s)
	W0512 01:00:28.458913    2560 network_create.go:107] failed to create docker network cert-options-20220512010013-7184 192.168.49.0/24, will retry: subnet is taken
	I0512 01:00:28.480051    2560 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000120598] amended:false}} dirty:map[] misses:0}
	I0512 01:00:28.480051    2560 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:00:28.501065    2560 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000120598] amended:true}} dirty:map[192.168.49.0:0xc000120598 192.168.58.0:0xc000006740] misses:0}
	I0512 01:00:28.501065    2560 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:00:28.501065    2560 network_create.go:115] attempt to create docker network cert-options-20220512010013-7184 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0512 01:00:28.510056    2560 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220512010013-7184
	I0512 01:00:27.458527    9720 start.go:284] selected driver: docker
	I0512 01:00:27.458527    9720 start.go:801] validating driver "docker" against &{Name:pause-20220512005140-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:pause-20220512005140-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 01:00:27.458804    9720 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0512 01:00:27.486708    9720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:00:29.736946    9720 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2500185s)
	I0512 01:00:29.737367    9720 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:69 OomKillDisable:true NGoroutines:67 SystemTime:2022-05-12 01:00:28.5688262 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:00:29.798386    9720 cni.go:95] Creating CNI manager for ""
	I0512 01:00:29.798386    9720 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 01:00:29.798386    9720 start_flags.go:306] config:
	{Name:pause-20220512005140-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:pause-20220512005140-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 01:00:29.803376    9720 out.go:177] * Starting control plane node pause-20220512005140-7184 in cluster pause-20220512005140-7184
	I0512 01:00:29.806378    9720 cache.go:120] Beginning downloading kic base image for docker with docker
	I0512 01:00:29.810380    9720 out.go:177] * Pulling base image ...
	I0512 01:00:27.171250    8484 cli_runner.go:217] Completed: docker run --rm --name docker-flags-20220512005959-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20220512005959-7184 --entrypoint /usr/bin/test -v docker-flags-20220512005959-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib: (8.7110453s)
	I0512 01:00:27.171250    8484 oci.go:107] Successfully prepared a docker volume docker-flags-20220512005959-7184
	I0512 01:00:27.171250    8484 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:00:27.171631    8484 kic.go:179] Starting extracting preloaded images to volume ...
	I0512 01:00:27.179634    8484 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20220512005959-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir
	I0512 01:00:29.812376    9720 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:00:29.812376    9720 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0512 01:00:29.812376    9720 preload.go:148] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0512 01:00:29.812376    9720 cache.go:57] Caching tarball of preloaded images
	I0512 01:00:29.812376    9720 preload.go:174] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0512 01:00:29.812376    9720 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0512 01:00:29.813388    9720 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\pause-20220512005140-7184\config.json ...
	I0512 01:00:30.931114    9720 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
	I0512 01:00:30.931114    9720 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in daemon, skipping load
	I0512 01:00:30.931114    9720 cache.go:206] Successfully downloaded all kic artifacts
	I0512 01:00:30.931114    9720 start.go:352] acquiring machines lock for pause-20220512005140-7184: {Name:mk3327eaa9951f77c6b8356d0562285f66d4de7a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 01:00:30.931114    9720 start.go:356] acquired machines lock for "pause-20220512005140-7184" in 0s
	I0512 01:00:30.931114    9720 start.go:94] Skipping create...Using existing machine configuration
	I0512 01:00:30.931114    9720 fix.go:55] fixHost starting: 
	I0512 01:00:30.960315    9720 cli_runner.go:164] Run: docker container inspect pause-20220512005140-7184 --format={{.State.Status}}
	I0512 01:00:32.118282    9720 cli_runner.go:217] Completed: docker container inspect pause-20220512005140-7184 --format={{.State.Status}}: (1.1579072s)
	I0512 01:00:32.118282    9720 fix.go:103] recreateIfNeeded on pause-20220512005140-7184: state=Running err=<nil>
	W0512 01:00:32.118282    9720 fix.go:129] unexpected machine state, will restart: <nil>
	I0512 01:00:32.120285    9720 out.go:177] * Updating the running docker "pause-20220512005140-7184" container ...
	W0512 01:00:29.720799    2560 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220512010013-7184 returned with exit code 1
	I0512 01:00:29.720799    2560 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220512010013-7184: (1.2106809s)
	W0512 01:00:29.720799    2560 network_create.go:107] failed to create docker network cert-options-20220512010013-7184 192.168.58.0/24, will retry: subnet is taken
	I0512 01:00:29.742593    2560 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000120598] amended:true}} dirty:map[192.168.49.0:0xc000120598 192.168.58.0:0xc000006740] misses:1}
	I0512 01:00:29.742593    2560 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:00:29.770954    2560 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000120598] amended:true}} dirty:map[192.168.49.0:0xc000120598 192.168.58.0:0xc000006740 192.168.67.0:0xc000120658] misses:1}
	I0512 01:00:29.770954    2560 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:00:29.770954    2560 network_create.go:115] attempt to create docker network cert-options-20220512010013-7184 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0512 01:00:29.780150    2560 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220512010013-7184
	W0512 01:00:30.884200    2560 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220512010013-7184 returned with exit code 1
	I0512 01:00:30.884200    2560 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220512010013-7184: (1.1034338s)
	W0512 01:00:30.884200    2560 network_create.go:107] failed to create docker network cert-options-20220512010013-7184 192.168.67.0/24, will retry: subnet is taken
	I0512 01:00:30.904175    2560 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000120598] amended:true}} dirty:map[192.168.49.0:0xc000120598 192.168.58.0:0xc000006740 192.168.67.0:0xc000120658] misses:2}
	I0512 01:00:30.905330    2560 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:00:30.925672    2560 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000120598] amended:true}} dirty:map[192.168.49.0:0xc000120598 192.168.58.0:0xc000006740 192.168.67.0:0xc000120658 192.168.76.0:0xc0000084b8] misses:2}
	I0512 01:00:30.925672    2560 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:00:30.925672    2560 network_create.go:115] attempt to create docker network cert-options-20220512010013-7184 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0512 01:00:30.936956    2560 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220512010013-7184
	I0512 01:00:32.163703    2560 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cert-options-20220512010013-7184: (1.2266834s)
	I0512 01:00:32.163703    2560 network_create.go:99] docker network cert-options-20220512010013-7184 192.168.76.0/24 created
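The retry loop above is worth unpacking: 192.168.58.0/24 and 192.168.67.0/24 were both reserved, so minikube advanced the third octet by 9 on each attempt until docker accepted 192.168.76.0/24. Below is a minimal Go sketch of that pattern; the step size and starting subnet come from the log, while the network name and loop bounds are illustrative assumptions, not minikube's actual implementation.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// tryCreateNetwork shells out to the same docker command seen in the log,
	// attempting a bridge network on 192.168.<octet>.0/24. Docker fails the
	// command when the subnet overlaps an existing network ("subnet is taken").
	func tryCreateNetwork(name string, octet int) error {
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge",
			fmt.Sprintf("--subnet=192.168.%d.0/24", octet),
			fmt.Sprintf("--gateway=192.168.%d.1", octet),
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			name)
		return cmd.Run()
	}

	func main() {
		// Candidate third octets advance by 9, matching the log: 58, 67, 76, ...
		for octet := 58; octet <= 247; octet += 9 {
			if err := tryCreateNetwork("example-net", octet); err != nil { // name is a placeholder
				fmt.Printf("192.168.%d.0/24 taken, trying next candidate\n", octet)
				continue
			}
			fmt.Printf("created network on 192.168.%d.0/24\n", octet)
			return
		}
		fmt.Println("no free /24 found")
	}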
	I0512 01:00:32.163703    2560 kic.go:106] calculated static IP "192.168.76.2" for the "cert-options-20220512010013-7184" container
	I0512 01:00:32.181344    2560 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0512 01:00:33.221074    2560 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0386712s)
	I0512 01:00:33.230209    2560 cli_runner.go:164] Run: docker volume create cert-options-20220512010013-7184 --label name.minikube.sigs.k8s.io=cert-options-20220512010013-7184 --label created_by.minikube.sigs.k8s.io=true
	I0512 01:00:32.123369    9720 machine.go:88] provisioning docker machine ...
	I0512 01:00:32.123369    9720 ubuntu.go:169] provisioning hostname "pause-20220512005140-7184"
	I0512 01:00:32.131649    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:33.205821    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.0741157s)
	I0512 01:00:33.211278    9720 main.go:134] libmachine: Using SSH client type: native
	I0512 01:00:33.212285    9720 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49879 <nil> <nil>}
	I0512 01:00:33.212285    9720 main.go:134] libmachine: About to run SSH command:
	sudo hostname pause-20220512005140-7184 && echo "pause-20220512005140-7184" | sudo tee /etc/hostname
	I0512 01:00:33.441753    9720 main.go:134] libmachine: SSH cmd err, output: <nil>: pause-20220512005140-7184
	
	I0512 01:00:33.451179    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:34.619225    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.167205s)
	I0512 01:00:34.623631    9720 main.go:134] libmachine: Using SSH client type: native
	I0512 01:00:34.623631    9720 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49879 <nil> <nil>}
	I0512 01:00:34.623631    9720 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20220512005140-7184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20220512005140-7184/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20220512005140-7184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0512 01:00:34.763536    9720 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0512 01:00:34.763536    9720 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0512 01:00:34.763536    9720 ubuntu.go:177] setting up certificates
	I0512 01:00:34.763536    9720 provision.go:83] configureAuth start
	I0512 01:00:34.772548    9720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220512005140-7184
	I0512 01:00:35.875930    9720 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220512005140-7184: (1.1033257s)
	I0512 01:00:35.875930    9720 provision.go:138] copyHostCerts
	I0512 01:00:35.875930    9720 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0512 01:00:35.875930    9720 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0512 01:00:35.876744    9720 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0512 01:00:35.877687    9720 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0512 01:00:35.877687    9720 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0512 01:00:35.878488    9720 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0512 01:00:35.879470    9720 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0512 01:00:35.879470    9720 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0512 01:00:35.879470    9720 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0512 01:00:35.880701    9720 provision.go:112] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.pause-20220512005140-7184 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20220512005140-7184]
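The provision step above signs a per-machine server certificate with minikube's own CA, embedding the node IP and hostnames as SANs. The following is a minimal, self-contained Go sketch of that kind of signing; it generates a throwaway CA instead of loading ca.pem/ca-key.pem from the cert store, and error handling is elided, so it illustrates the shape of the operation rather than minikube's exact code.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA; real minikube reuses the CA from its cert store.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA (throwaway)"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// SANs mirror the san=[...] list in the log: node IP, loopback, and names.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.pause-20220512005140-7184"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "pause-20220512005140-7184"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}) // server.pem equivalent
	}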
	I0512 01:00:36.047848    9720 provision.go:172] copyRemoteCerts
	I0512 01:00:36.057611    9720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0512 01:00:36.063600    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:34.335226    2560 cli_runner.go:217] Completed: docker volume create cert-options-20220512010013-7184 --label name.minikube.sigs.k8s.io=cert-options-20220512010013-7184 --label created_by.minikube.sigs.k8s.io=true: (1.10496s)
	I0512 01:00:34.335226    2560 oci.go:103] Successfully created a docker volume cert-options-20220512010013-7184
	I0512 01:00:34.344687    2560 cli_runner.go:164] Run: docker run --rm --name cert-options-20220512010013-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-20220512010013-7184 --entrypoint /usr/bin/test -v cert-options-20220512010013-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib
	I0512 01:00:37.280228    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.2165655s)
	I0512 01:00:40.301352    9720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49879 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\pause-20220512005140-7184\id_rsa Username:docker}
	I0512 01:00:40.455812    9720 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3974422s)
	I0512 01:00:40.456242    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0512 01:00:40.576187    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I0512 01:00:40.632325    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0512 01:00:40.687770    9720 provision.go:86] duration metric: configureAuth took 5.9239278s
	I0512 01:00:40.687770    9720 ubuntu.go:193] setting minikube options for container-runtime
	I0512 01:00:40.688788    9720 config.go:178] Loaded profile config "pause-20220512005140-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:00:40.698717    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:41.785454    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.0866808s)
	I0512 01:00:41.789462    9720 main.go:134] libmachine: Using SSH client type: native
	I0512 01:00:41.790463    9720 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49879 <nil> <nil>}
	I0512 01:00:41.790463    9720 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0512 01:00:41.972697    9720 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0512 01:00:41.972697    9720 ubuntu.go:71] root file system type: overlay
	I0512 01:00:41.972697    9720 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0512 01:00:41.979784    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:43.050362    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.0702512s)
	I0512 01:00:43.056713    9720 main.go:134] libmachine: Using SSH client type: native
	I0512 01:00:43.056713    9720 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49879 <nil> <nil>}
	I0512 01:00:43.056713    9720 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0512 01:00:43.258865    9720 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0512 01:00:43.267382    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:44.355201    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.0872181s)
	I0512 01:00:44.358192    9720 main.go:134] libmachine: Using SSH client type: native
	I0512 01:00:44.359201    9720 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 49879 <nil> <nil>}
	I0512 01:00:44.359201    9720 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0512 01:00:44.573776    9720 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0512 01:00:44.574305    9720 machine.go:91] provisioned docker machine in 12.4502931s
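The `sudo diff -u ... || { sudo mv ...; sudo systemctl ... restart docker; }` command above is an idempotent update: the unit file is replaced and docker restarted only when the rendered content actually differs from what is installed, which is why a healthy machine survives repeated provisioning untouched. A minimal local Go sketch of the same compare-then-swap pattern, with illustrative paths and service name (running it against a real unit would require root):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// updateUnit mirrors the compare-then-swap run over SSH above: write the
	// rendered unit only when it differs from the installed one, then reload
	// systemd and restart the service.
	func updateUnit(path string, rendered []byte, service string) error {
		current, err := os.ReadFile(path)
		if err == nil && bytes.Equal(current, rendered) {
			return nil // no change: leave the running service untouched
		}
		if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil { // the "sudo mv" step
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "restart", service},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %s", err, out)
			}
		}
		return nil
	}

	func main() {
		unit := []byte("[Unit]\nDescription=example unit\n") // placeholder content
		if err := updateUnit("/tmp/example.service", unit, "example"); err != nil {
			fmt.Println(err)
		}
	}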
	I0512 01:00:44.574305    9720 start.go:306] post-start starting for "pause-20220512005140-7184" (driver="docker")
	I0512 01:00:44.574376    9720 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0512 01:00:44.590283    9720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0512 01:00:44.600266    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:45.751578    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.1512523s)
	I0512 01:00:45.751578    9720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49879 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\pause-20220512005140-7184\id_rsa Username:docker}
	I0512 01:00:45.837435    9720 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.2470253s)
	I0512 01:00:45.852441    9720 ssh_runner.go:195] Run: cat /etc/os-release
	I0512 01:00:45.863443    9720 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0512 01:00:45.863443    9720 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0512 01:00:45.863443    9720 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0512 01:00:45.863443    9720 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0512 01:00:45.863443    9720 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0512 01:00:45.865619    9720 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0512 01:00:45.866430    9720 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem -> 71842.pem in /etc/ssl/certs
	I0512 01:00:45.877430    9720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0512 01:00:45.955497    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /etc/ssl/certs/71842.pem (1708 bytes)
	I0512 01:00:46.064856    9720 start.go:309] post-start completed in 1.4904036s
	I0512 01:00:46.076590    9720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 01:00:46.085221    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:45.122117    6824 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cert-expiration-20220512005951-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir: (31.5493632s)
	I0512 01:00:45.122117    6824 kic.go:188] duration metric: took 31.556362 seconds to extract preloaded images to volume
	I0512 01:00:45.129098    6824 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:00:48.474582    2560 cli_runner.go:217] Completed: docker run --rm --name cert-options-20220512010013-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-20220512010013-7184 --entrypoint /usr/bin/test -v cert-options-20220512010013-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib: (14.1291662s)
	I0512 01:00:48.474582    2560 oci.go:107] Successfully prepared a docker volume cert-options-20220512010013-7184
	I0512 01:00:48.474582    2560 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:00:48.474829    2560 kic.go:179] Starting extracting preloaded images to volume ...
	I0512 01:00:48.481947    2560 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cert-options-20220512010013-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir
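The extraction step above never needs tar or lz4 on the Windows host: the tarball of preloaded images is bind-mounted read-only into a throwaway container whose entrypoint is /usr/bin/tar, and it unpacks straight into the named volume that later becomes the node's /var. A hedged Go sketch of that invocation follows; all names are placeholders, not the real paths from the log.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// extractPreload unpacks an lz4-compressed image tarball into a named docker
	// volume via a disposable container, the same shape as the docker run in the
	// log: tar runs inside the image, so the host needs neither tar nor lz4.
	func extractPreload(tarball, volume, image string) error {
		return exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro", // host tarball, read-only
			"-v", volume+":/extractDir",        // named volume, later the node's /var
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").Run()
	}

	func main() {
		// All three arguments are placeholders.
		err := extractPreload("preloaded-images.tar.lz4", "example-volume", "example/kicbase:latest")
		fmt.Println(err)
	}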
	I0512 01:00:47.156416    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.07114s)
	I0512 01:00:47.156416    9720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49879 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\pause-20220512005140-7184\id_rsa Username:docker}
	I0512 01:00:47.262011    9720 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.1853597s)
	I0512 01:00:47.271928    9720 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0512 01:00:47.282929    9720 fix.go:57] fixHost completed within 16.3509717s
	I0512 01:00:47.282929    9720 start.go:81] releasing machines lock for "pause-20220512005140-7184", held for 16.3509717s
	I0512 01:00:47.289932    9720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220512005140-7184
	I0512 01:00:48.346520    9720 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220512005140-7184: (1.0565337s)
	I0512 01:00:48.351156    9720 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0512 01:00:48.359830    9720 ssh_runner.go:195] Run: systemctl --version
	I0512 01:00:48.362836    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:48.367839    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:49.441356    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.0784644s)
	I0512 01:00:49.441356    9720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49879 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\pause-20220512005140-7184\id_rsa Username:docker}
	I0512 01:00:49.456707    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.0888118s)
	I0512 01:00:49.456707    9720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49879 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\pause-20220512005140-7184\id_rsa Username:docker}
	I0512 01:00:49.608714    9720 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.2574265s)
	I0512 01:00:49.608714    9720 ssh_runner.go:235] Completed: systemctl --version: (1.2488191s)
	I0512 01:00:49.620717    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0512 01:00:49.656708    9720 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 01:00:49.683741    9720 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0512 01:00:49.694906    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0512 01:00:49.725785    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0512 01:00:50.064014    9720 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0512 01:00:50.276641    9720 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0512 01:00:50.469789    9720 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 01:00:50.509463    9720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0512 01:00:50.766373    9720 ssh_runner.go:195] Run: sudo systemctl start docker
	I0512 01:00:50.799999    9720 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 01:00:50.892676    9720 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 01:00:50.976553    9720 out.go:204] * Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	I0512 01:00:50.983568    9720 cli_runner.go:164] Run: docker exec -t pause-20220512005140-7184 dig +short host.docker.internal
	I0512 01:00:47.314929    6824 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1857182s)
	I0512 01:00:47.314929    6824 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:69 OomKillDisable:true NGoroutines:71 SystemTime:2022-05-12 01:00:46.2203529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:00:47.321932    6824 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0512 01:00:49.488697    6824 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.166653s)
	I0512 01:00:49.499697    6824 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-20220512005951-7184 --name cert-expiration-20220512005951-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-20220512005951-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-20220512005951-7184 --network cert-expiration-20220512005951-7184 --ip 192.168.58.2 --volume cert-expiration-20220512005951-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
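That long `docker run` is the whole "machine": a privileged container with tmpfs mounts for /tmp and /run, a named volume on /var for persistent state, a static IP on the profile's network, and each service port published on a random 127.0.0.1 port. The reduced Go sketch below annotates the main flags; names, the IP, and the image tag are placeholders, and the full real invocation is in the log line above.

	package main

	import "os/exec"

	// runNode shows, in reduced form, how a minikube "machine" is just a
	// privileged container.
	func runNode(name, network, ip, image string) *exec.Cmd {
		return exec.Command("docker", "run", "-d", "-t",
			"--privileged", // systemd and a nested dockerd need this
			"--security-opt", "seccomp=unconfined",
			"--security-opt", "apparmor=unconfined",
			"--tmpfs", "/tmp", "--tmpfs", "/run", // fresh scratch space on every boot
			"-v", "/lib/modules:/lib/modules:ro", // kernel modules come from the host
			"--hostname", name, "--name", name,
			"--network", network, "--ip", ip, // static IP on the per-profile network
			"--volume", name+":/var", // persistent state: images, etcd, certs
			"--memory=2048mb", "--cpus=2",
			"-e", "container=docker",
			"--publish=127.0.0.1::8443", // apiserver on a random host port
			"--publish=127.0.0.1::22",   // ssh, ditto
			image)
	}

	func main() {
		_ = runNode("example-node", "example-net", "192.168.58.2", "example/kicbase:latest").Run()
	}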
	I0512 01:00:51.928393    8484 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-20220512005959-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir: (24.7474816s)
	I0512 01:00:51.928393    8484 kic.go:188] duration metric: took 24.755485 seconds to extract preloaded images to volume
	I0512 01:00:51.936391    8484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:00:54.332242    8484 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.3957281s)
	I0512 01:00:54.332242    8484 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:81 OomKillDisable:true NGoroutines:70 SystemTime:2022-05-12 01:00:53.1429681 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:00:54.340232    8484 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0512 01:00:52.305372    9720 cli_runner.go:217] Completed: docker exec -t pause-20220512005140-7184 dig +short host.docker.internal: (1.3217351s)
	I0512 01:00:52.305372    9720 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
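Because the cluster runs inside Docker Desktop, the host's address is not directly knowable from the Windows side; minikube discovers it by resolving host.docker.internal from inside the node container, as the dig call above shows (192.168.65.2 here). A small Go sketch of that lookup, assuming a running container name:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostIPByDig resolves host.docker.internal from inside the node container,
	// the same trick as the `docker exec ... dig +short` call in the log. The
	// container name is an assumption; any running kicbase container works.
	func hostIPByDig(container string) (string, error) {
		out, err := exec.Command("docker", "exec", "-t", container,
			"dig", "+short", "host.docker.internal").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		ip, err := hostIPByDig("example-node") // placeholder name
		fmt.Println(ip, err)
	}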
	I0512 01:00:52.316362    9720 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0512 01:00:52.337396    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:53.511603    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.1741461s)
	I0512 01:00:53.511603    9720 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:00:53.527598    9720 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 01:00:53.610948    9720 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 01:00:53.610948    9720 docker.go:541] Images already preloaded, skipping extraction
	I0512 01:00:53.625001    9720 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 01:00:53.689181    9720 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 01:00:53.689181    9720 cache_images.go:84] Images are preloaded, skipping loading
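The two identical image lists above are how minikube decides it can skip loading: it lists `docker images --format {{.Repository}}:{{.Tag}}` inside the node and checks the expected preloaded set against the output. A minimal local Go sketch of that check (the real flow runs the listing over SSH):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imagesPreloaded lists repository:tag pairs the same way the log does and
	// reports whether every expected image is already present.
	func imagesPreloaded(expected []string) (bool, error) {
		out, err := exec.Command("docker", "images",
			"--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			return false, err
		}
		have := make(map[string]bool)
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			have[line] = true
		}
		for _, img := range expected {
			if !have[img] {
				return false, nil // at least one image missing: extraction needed
			}
		}
		return true, nil
	}

	func main() {
		ok, err := imagesPreloaded([]string{
			"k8s.gcr.io/kube-apiserver:v1.23.5",
			"k8s.gcr.io/pause:3.6",
		})
		fmt.Println(ok, err)
	}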
	I0512 01:00:53.697168    9720 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0512 01:00:53.897956    9720 cni.go:95] Creating CNI manager for ""
	I0512 01:00:53.898962    9720 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 01:00:53.898962    9720 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0512 01:00:53.898962    9720 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20220512005140-7184 NodeName:pause-20220512005140-7184 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0512 01:00:53.898962    9720 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "pause-20220512005140-7184"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
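The kubeadm config above is rendered from the option struct logged at kubeadm.go:158 into four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and shipped to /var/tmp/minikube/kubeadm.yaml.new. A much-reduced text/template sketch of that kind of rendering follows; it shows only the first document and two fields, and is not minikube's actual template.

	package main

	import (
		"net"
		"os"
		"text/template"
	)

	// A stand-in for the template minikube renders into kubeadm.yaml.new.
	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.Port}}
	`

	type opts struct {
		NodeIP net.IP
		Port   int
	}

	func main() {
		t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
		// Values taken from the log above (AdvertiseAddress and APIServerPort).
		_ = t.Execute(os.Stdout, opts{NodeIP: net.ParseIP("192.168.67.2"), Port: 8443})
	}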
	I0512 01:00:53.898962    9720 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=pause-20220512005140-7184 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:pause-20220512005140-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0512 01:00:53.913934    9720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0512 01:00:53.939131    9720 binaries.go:44] Found k8s binaries, skipping transfer
	I0512 01:00:53.952465    9720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0512 01:00:53.973427    9720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0512 01:00:54.008390    9720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0512 01:00:54.047384    9720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2047 bytes)
	I0512 01:00:54.094406    9720 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0512 01:00:54.104417    9720 certs.go:54] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\pause-20220512005140-7184 for IP: 192.168.67.2
	I0512 01:00:54.104417    9720 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I0512 01:00:54.104417    9720 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I0512 01:00:54.105401    9720 certs.go:298] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\pause-20220512005140-7184\client.key
	I0512 01:00:54.105401    9720 certs.go:298] skipping minikube signed cert generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\pause-20220512005140-7184\apiserver.key.c7fa3a9e
	I0512 01:00:54.106392    9720 certs.go:298] skipping aggregator signed cert generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\pause-20220512005140-7184\proxy-client.key
	I0512 01:00:54.107386    9720 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184.pem (1338 bytes)
	W0512 01:00:54.107386    9720 certs.go:384] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184_empty.pem, impossibly tiny 0 bytes
	I0512 01:00:54.107386    9720 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0512 01:00:54.107386    9720 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0512 01:00:54.107386    9720 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0512 01:00:54.108398    9720 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0512 01:00:54.108398    9720 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem (1708 bytes)
	I0512 01:00:54.109397    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\pause-20220512005140-7184\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0512 01:00:54.166032    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\pause-20220512005140-7184\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0512 01:00:54.234233    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\pause-20220512005140-7184\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0512 01:00:54.288933    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\pause-20220512005140-7184\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0512 01:00:54.346244    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0512 01:00:54.398234    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0512 01:00:54.451231    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0512 01:00:54.501447    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0512 01:00:54.545503    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /usr/share/ca-certificates/71842.pem (1708 bytes)
	I0512 01:00:54.596181    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0512 01:00:54.648264    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184.pem --> /usr/share/ca-certificates/7184.pem (1338 bytes)
	I0512 01:00:54.710433    9720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0512 01:00:54.770198    9720 ssh_runner.go:195] Run: openssl version
	I0512 01:00:54.822088    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71842.pem && ln -fs /usr/share/ca-certificates/71842.pem /etc/ssl/certs/71842.pem"
	I0512 01:00:54.863766    9720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71842.pem
	I0512 01:00:54.878108    9720 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 11 23:10 /usr/share/ca-certificates/71842.pem
	I0512 01:00:54.890978    9720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71842.pem
	I0512 01:00:54.929069    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71842.pem /etc/ssl/certs/3ec20f2e.0"
	I0512 01:00:54.972779    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0512 01:00:55.019413    9720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:00:55.032411    9720 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 11 23:00 /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:00:55.047410    9720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:00:55.073427    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0512 01:00:55.110515    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7184.pem && ln -fs /usr/share/ca-certificates/7184.pem /etc/ssl/certs/7184.pem"
	I0512 01:00:55.142523    9720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7184.pem
	I0512 01:00:55.151527    9720 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 11 23:10 /usr/share/ca-certificates/7184.pem
	I0512 01:00:55.165516    9720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7184.pem
	I0512 01:00:55.188519    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7184.pem /etc/ssl/certs/51391683.0"
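The openssl/ln pairs above implement the standard c_rehash convention: each CA certificate is symlinked as <subject-hash>.0 inside /etc/ssl/certs so that OpenSSL-based clients scanning the directory can find it by hash. A Go sketch of creating one such link, assuming the openssl binary is on PATH and with placeholder paths:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkByHash computes the subject hash of a CA certificate and symlinks
	// it as <hash>.0, mirroring the shell pairs in the log.
	func linkByHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		os.Remove(link) // "ln -fs" semantics: replace any stale link first
		return os.Symlink(certPath, link)
	}

	func main() {
		// Placeholders for /usr/share/ca-certificates/minikubeCA.pem and /etc/ssl/certs.
		if err := linkByHash("ca.pem", "."); err != nil {
			fmt.Println(err)
		}
	}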
	I0512 01:00:55.208521    9720 kubeadm.go:391] StartCluster: {Name:pause-20220512005140-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:pause-20220512005140-7184 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false}
	I0512 01:00:55.216527    9720 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0512 01:00:55.303174    9720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0512 01:00:55.327156    9720 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0512 01:00:55.327156    9720 kubeadm.go:601] restartCluster start
	I0512 01:00:55.339160    9720 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0512 01:00:55.358153    9720 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:00:55.365155    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:53.049183    6824 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-20220512005951-7184 --name cert-expiration-20220512005951-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-20220512005951-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-20220512005951-7184 --network cert-expiration-20220512005951-7184 --ip 192.168.58.2 --volume cert-expiration-20220512005951-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: (3.5493032s)
	I0512 01:00:53.060165    6824 cli_runner.go:164] Run: docker container inspect cert-expiration-20220512005951-7184 --format={{.State.Running}}
	I0512 01:00:54.316963    6824 cli_runner.go:217] Completed: docker container inspect cert-expiration-20220512005951-7184 --format={{.State.Running}}: (1.2567331s)
	I0512 01:00:54.328336    6824 cli_runner.go:164] Run: docker container inspect cert-expiration-20220512005951-7184 --format={{.State.Status}}
	I0512 01:00:55.530314    6824 cli_runner.go:217] Completed: docker container inspect cert-expiration-20220512005951-7184 --format={{.State.Status}}: (1.2019159s)
	I0512 01:00:55.538167    6824 cli_runner.go:164] Run: docker exec cert-expiration-20220512005951-7184 stat /var/lib/dpkg/alternatives/iptables
	I0512 01:00:56.500255    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.1350414s)
	I0512 01:00:56.501247    9720 kubeconfig.go:92] found "pause-20220512005140-7184" server: "https://127.0.0.1:49878"
	I0512 01:00:56.502267    9720 kapi.go:59] client config for pause-20220512005140-7184: &rest.Config{Host:"https://127.0.0.1:49878", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\pause-20220512005140-7184\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\pause-20220512005140-7184\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1315600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
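
The rest.Config dumped above is a client-go configuration: host plus the profile's client cert/key and the cluster CA. In ordinary client code the same config is built from a kubeconfig file rather than assembled by hand; a minimal sketch (the kubeconfig path is a hypothetical placeholder, the real one in this log lives under minikube-integration):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a *rest.Config equivalent to the one logged above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("kube-system pods:", len(pods.Items))
	}
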
	I0512 01:00:56.513256    9720 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0512 01:00:56.536270    9720 api_server.go:165] Checking apiserver status ...
	I0512 01:00:56.548267    9720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:00:56.586256    9720 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1806/cgroup
	I0512 01:00:56.610260    9720 api_server.go:181] apiserver freezer: "20:freezer:/docker/18e2eed271b9010aeba677455559d5fa350c421d241ea6643e75bf1b1295b98a/kubepods/burstable/pod5dbc247a18a40cde52945b4c8d27dc67/34d00adfe03d17f24e700df04cfc476471de50e1834344088f04a8b6e8af0bc9"
	I0512 01:00:56.621258    9720 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/18e2eed271b9010aeba677455559d5fa350c421d241ea6643e75bf1b1295b98a/kubepods/burstable/pod5dbc247a18a40cde52945b4c8d27dc67/34d00adfe03d17f24e700df04cfc476471de50e1834344088f04a8b6e8af0bc9/freezer.state
	I0512 01:00:56.642249    9720 api_server.go:203] freezer state: "THAWED"
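
Before probing healthz, the check above confirms the apiserver process is not frozen: pgrep finds the pid, /proc/<pid>/cgroup yields the freezer cgroup path, and freezer.state must read THAWED. A sketch of that probe as it would run on the node (cgroup v1 layout, as in this log):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// freezerState returns the cgroup-v1 freezer state for a pid.
	func freezerState(pid string) (string, error) {
		f, err := os.Open("/proc/" + pid + "/cgroup")
		if err != nil {
			return "", err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			// Lines look like "20:freezer:/docker/<id>/kubepods/...".
			parts := strings.SplitN(sc.Text(), ":", 3)
			if len(parts) == 3 && parts[1] == "freezer" {
				b, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
				if err != nil {
					return "", err
				}
				return strings.TrimSpace(string(b)), nil
			}
		}
		return "", fmt.Errorf("no freezer controller for pid %s", pid)
	}

	func main() {
		state, err := freezerState(os.Args[1]) // e.g. "1806", the pid found by pgrep
		if err != nil {
			panic(err)
		}
		fmt.Println("freezer state:", state) // expect THAWED
	}
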
	I0512 01:00:56.642249    9720 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:49878/healthz ...
	I0512 01:00:56.659291    9720 api_server.go:266] https://127.0.0.1:49878/healthz returned 200:
	ok
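
The 200/"ok" above comes from a GET on /healthz authenticated with the profile's client certificate against the cluster CA. A minimal sketch of that request (file names here are placeholders for the client.crt, client.key, and ca.crt paths logged above):

	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func main() {
		cert, err := tls.LoadX509KeyPair("client.crt", "client.key") // profile client cert/key
		if err != nil {
			panic(err)
		}
		caPEM, err := os.ReadFile("ca.crt") // cluster CA
		if err != nil {
			panic(err)
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)
		client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
			Certificates: []tls.Certificate{cert},
			RootCAs:      pool,
		}}}
		resp, err := client.Get("https://127.0.0.1:49878/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // 200 ok when healthy
	}
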
	I0512 01:00:56.686257    9720 system_pods.go:86] 6 kube-system pods found
	I0512 01:00:56.687259    9720 system_pods.go:89] "coredns-64897985d-6rqbl" [7d6e3981-4ff9-4593-83b1-57b703abd918] Running
	I0512 01:00:56.687259    9720 system_pods.go:89] "etcd-pause-20220512005140-7184" [62c0faef-19ea-4696-97ab-48e84baedea3] Running
	I0512 01:00:56.687259    9720 system_pods.go:89] "kube-apiserver-pause-20220512005140-7184" [83c3db73-94bd-4f33-83e9-6c42f62f4d4b] Running
	I0512 01:00:56.687259    9720 system_pods.go:89] "kube-controller-manager-pause-20220512005140-7184" [054f4a92-3568-4023-a22b-617612d6b1fb] Running
	I0512 01:00:56.687259    9720 system_pods.go:89] "kube-proxy-sk8qh" [f28d65ac-6d94-41fd-ad5c-dfc02902ee82] Running
	I0512 01:00:56.687259    9720 system_pods.go:89] "kube-scheduler-pause-20220512005140-7184" [ffdf2485-8fe5-44b1-b98c-7e4e039bcac0] Running
	I0512 01:00:56.690268    9720 api_server.go:140] control plane version: v1.23.5
	I0512 01:00:56.690268    9720 kubeadm.go:595] The running cluster does not require reconfiguration: 127.0.0.1
	I0512 01:00:56.690268    9720 kubeadm.go:649] Taking a shortcut, as the cluster seems to be properly configured
	I0512 01:00:56.690268    9720 kubeadm.go:605] restartCluster took 1.3630421s
	I0512 01:00:56.690268    9720 kubeadm.go:393] StartCluster complete in 1.4816705s
	I0512 01:00:56.690268    9720 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:00:56.690268    9720 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0512 01:00:56.692280    9720 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:00:56.707354    9720 kapi.go:59] client config for pause-20220512005140-7184: &rest.Config{Host:"https://127.0.0.1:49878", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\pause-20220512005140-7184\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\pause-20220512005140-7184\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1315600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0512 01:00:56.716136    9720 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20220512005140-7184" rescaled to 1
	I0512 01:00:56.716136    9720 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 01:00:56.719132    9720 out.go:177] * Verifying Kubernetes components...
	I0512 01:00:56.716136    9720 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0512 01:00:56.716136    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0512 01:00:56.716136    9720 config.go:178] Loaded profile config "pause-20220512005140-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:00:56.720106    9720 addons.go:65] Setting storage-provisioner=true in profile "pause-20220512005140-7184"
	I0512 01:00:56.720106    9720 addons.go:65] Setting default-storageclass=true in profile "pause-20220512005140-7184"
	I0512 01:00:56.722118    9720 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20220512005140-7184"
	I0512 01:00:56.720106    9720 addons.go:153] Setting addon storage-provisioner=true in "pause-20220512005140-7184"
	W0512 01:00:56.722118    9720 addons.go:165] addon storage-provisioner should already be in state true
	I0512 01:00:56.722118    9720 host.go:66] Checking if "pause-20220512005140-7184" exists ...
	I0512 01:00:56.733108    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 01:00:56.738108    9720 cli_runner.go:164] Run: docker container inspect pause-20220512005140-7184 --format={{.State.Status}}
	I0512 01:00:56.742116    9720 cli_runner.go:164] Run: docker container inspect pause-20220512005140-7184 --format={{.State.Status}}
	I0512 01:00:56.924712    9720 start.go:795] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0512 01:00:56.933696    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:57.924508    9720 cli_runner.go:217] Completed: docker container inspect pause-20220512005140-7184 --format={{.State.Status}}: (1.1823308s)
	I0512 01:00:57.940033    9720 cli_runner.go:217] Completed: docker container inspect pause-20220512005140-7184 --format={{.State.Status}}: (1.2018626s)
	I0512 01:00:58.081757    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.147887s)
	I0512 01:00:58.088589    9720 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0512 01:00:58.089442    9720 node_ready.go:35] waiting up to 6m0s for node "pause-20220512005140-7184" to be "Ready" ...
	I0512 01:00:56.516249    8484 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.1759055s)
	I0512 01:00:56.525279    8484 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname docker-flags-20220512005959-7184 --name docker-flags-20220512005959-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20220512005959-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=docker-flags-20220512005959-7184 --network docker-flags-20220512005959-7184 --ip 192.168.49.2 --volume docker-flags-20220512005959-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
	I0512 01:00:58.090251    9720 kapi.go:59] client config for pause-20220512005140-7184: &rest.Config{Host:"https://127.0.0.1:49878", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\pause-20220512005140-7184\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\pause-20220512005140-7184\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1315600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0512 01:00:58.108338    9720 node_ready.go:49] node "pause-20220512005140-7184" has status "Ready":"True"
	I0512 01:00:58.274591    9720 node_ready.go:38] duration metric: took 185.1387ms waiting for node "pause-20220512005140-7184" to be "Ready" ...
	I0512 01:00:58.274591    9720 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 01:00:58.274591    9720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0512 01:00:58.274591    9720 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0512 01:00:58.295857    9720 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-6rqbl" in "kube-system" namespace to be "Ready" ...
	I0512 01:00:58.297359    9720 addons.go:153] Setting addon default-storageclass=true in "pause-20220512005140-7184"
	W0512 01:00:58.297391    9720 addons.go:165] addon default-storageclass should already be in state true
	I0512 01:00:58.297391    9720 host.go:66] Checking if "pause-20220512005140-7184" exists ...
	I0512 01:00:58.300649    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:58.321627    9720 pod_ready.go:92] pod "coredns-64897985d-6rqbl" in "kube-system" namespace has status "Ready":"True"
	I0512 01:00:58.321627    9720 pod_ready.go:81] duration metric: took 25.6769ms waiting for pod "coredns-64897985d-6rqbl" in "kube-system" namespace to be "Ready" ...
	I0512 01:00:58.321627    9720 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20220512005140-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:00:58.324600    9720 cli_runner.go:164] Run: docker container inspect pause-20220512005140-7184 --format={{.State.Status}}
	I0512 01:00:58.345074    9720 pod_ready.go:92] pod "etcd-pause-20220512005140-7184" in "kube-system" namespace has status "Ready":"True"
	I0512 01:00:58.345074    9720 pod_ready.go:81] duration metric: took 23.4465ms waiting for pod "etcd-pause-20220512005140-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:00:58.345074    9720 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220512005140-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:00:58.363580    9720 pod_ready.go:92] pod "kube-apiserver-pause-20220512005140-7184" in "kube-system" namespace has status "Ready":"True"
	I0512 01:00:58.363580    9720 pod_ready.go:81] duration metric: took 18.5046ms waiting for pod "kube-apiserver-pause-20220512005140-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:00:58.363580    9720 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220512005140-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:00:58.378601    9720 pod_ready.go:92] pod "kube-controller-manager-pause-20220512005140-7184" in "kube-system" namespace has status "Ready":"True"
	I0512 01:00:58.378601    9720 pod_ready.go:81] duration metric: took 15.0198ms waiting for pod "kube-controller-manager-pause-20220512005140-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:00:58.378601    9720 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sk8qh" in "kube-system" namespace to be "Ready" ...
	I0512 01:00:58.503272    9720 pod_ready.go:92] pod "kube-proxy-sk8qh" in "kube-system" namespace has status "Ready":"True"
	I0512 01:00:58.503272    9720 pod_ready.go:81] duration metric: took 124.6649ms waiting for pod "kube-proxy-sk8qh" in "kube-system" namespace to be "Ready" ...
	I0512 01:00:58.503272    9720 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220512005140-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:00:58.911679    9720 pod_ready.go:92] pod "kube-scheduler-pause-20220512005140-7184" in "kube-system" namespace has status "Ready":"True"
	I0512 01:00:58.911679    9720 pod_ready.go:81] duration metric: took 408.3859ms waiting for pod "kube-scheduler-pause-20220512005140-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:00:58.911679    9720 pod_ready.go:38] duration metric: took 637.0554ms for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
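
Each pod_ready step above polls a single pod until its Ready condition reports True, bounded by the 6m0s budget. A compact client-go sketch of that wait (kubeconfig path hypothetical; the pod name is taken from this log):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls until the named pod reports Ready=True or the timeout expires.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitPodReady(cs, "kube-system", "kube-proxy-sk8qh", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}
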
	I0512 01:00:58.912228    9720 api_server.go:51] waiting for apiserver process to appear ...
	I0512 01:00:58.926149    9720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:00:58.954274    9720 api_server.go:71] duration metric: took 2.2380222s to wait for apiserver process to appear ...
	I0512 01:00:58.954274    9720 api_server.go:87] waiting for apiserver healthz status ...
	I0512 01:00:58.954274    9720 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:49878/healthz ...
	I0512 01:00:58.974978    9720 api_server.go:266] https://127.0.0.1:49878/healthz returned 200:
	ok
	I0512 01:00:58.983737    9720 api_server.go:140] control plane version: v1.23.5
	I0512 01:00:58.983737    9720 api_server.go:130] duration metric: took 29.4614ms to wait for apiserver health ...
	I0512 01:00:58.983737    9720 system_pods.go:43] waiting for kube-system pods to appear ...
	I0512 01:00:59.117869    9720 system_pods.go:59] 6 kube-system pods found
	I0512 01:00:59.117869    9720 system_pods.go:61] "coredns-64897985d-6rqbl" [7d6e3981-4ff9-4593-83b1-57b703abd918] Running
	I0512 01:00:59.117869    9720 system_pods.go:61] "etcd-pause-20220512005140-7184" [62c0faef-19ea-4696-97ab-48e84baedea3] Running
	I0512 01:00:59.117869    9720 system_pods.go:61] "kube-apiserver-pause-20220512005140-7184" [83c3db73-94bd-4f33-83e9-6c42f62f4d4b] Running
	I0512 01:00:59.117869    9720 system_pods.go:61] "kube-controller-manager-pause-20220512005140-7184" [054f4a92-3568-4023-a22b-617612d6b1fb] Running
	I0512 01:00:59.117869    9720 system_pods.go:61] "kube-proxy-sk8qh" [f28d65ac-6d94-41fd-ad5c-dfc02902ee82] Running
	I0512 01:00:59.117869    9720 system_pods.go:61] "kube-scheduler-pause-20220512005140-7184" [ffdf2485-8fe5-44b1-b98c-7e4e039bcac0] Running
	I0512 01:00:59.117869    9720 system_pods.go:74] duration metric: took 134.1256ms to wait for pod list to return data ...
	I0512 01:00:59.117869    9720 default_sa.go:34] waiting for default service account to be created ...
	I0512 01:00:59.306002    9720 default_sa.go:45] found service account: "default"
	I0512 01:00:59.306002    9720 default_sa.go:55] duration metric: took 188.1237ms for default service account to be created ...
	I0512 01:00:59.306002    9720 system_pods.go:116] waiting for k8s-apps to be running ...
	I0512 01:00:59.450976    9720 cli_runner.go:217] Completed: docker container inspect pause-20220512005140-7184 --format={{.State.Status}}: (1.1263183s)
	I0512 01:00:59.450976    9720 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0512 01:00:59.450976    9720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0512 01:00:59.461594    9720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184
	I0512 01:00:59.467106    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.1663965s)
	I0512 01:00:59.467106    9720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49879 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\pause-20220512005140-7184\id_rsa Username:docker}
	I0512 01:00:59.608881    9720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 01:01:00.194012    9720 system_pods.go:86] 6 kube-system pods found
	I0512 01:01:00.194012    9720 system_pods.go:89] "coredns-64897985d-6rqbl" [7d6e3981-4ff9-4593-83b1-57b703abd918] Running
	I0512 01:01:00.194012    9720 system_pods.go:89] "etcd-pause-20220512005140-7184" [62c0faef-19ea-4696-97ab-48e84baedea3] Running
	I0512 01:01:00.194012    9720 system_pods.go:89] "kube-apiserver-pause-20220512005140-7184" [83c3db73-94bd-4f33-83e9-6c42f62f4d4b] Running
	I0512 01:01:00.194012    9720 system_pods.go:89] "kube-controller-manager-pause-20220512005140-7184" [054f4a92-3568-4023-a22b-617612d6b1fb] Running
	I0512 01:01:00.194012    9720 system_pods.go:89] "kube-proxy-sk8qh" [f28d65ac-6d94-41fd-ad5c-dfc02902ee82] Running
	I0512 01:01:00.194012    9720 system_pods.go:89] "kube-scheduler-pause-20220512005140-7184" [ffdf2485-8fe5-44b1-b98c-7e4e039bcac0] Running
	I0512 01:01:00.194012    9720 system_pods.go:126] duration metric: took 887.9638ms to wait for k8s-apps to be running ...
	I0512 01:01:00.194012    9720 system_svc.go:44] waiting for kubelet service to be running ....
	I0512 01:01:00.212032    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 01:01:00.625126    9720 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.016193s)
	I0512 01:01:00.625126    9720 system_svc.go:56] duration metric: took 431.092ms WaitForService to wait for kubelet.
	I0512 01:01:00.626104    9720 kubeadm.go:548] duration metric: took 3.9097665s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0512 01:01:00.626104    9720 node_conditions.go:102] verifying NodePressure condition ...
	I0512 01:01:00.639108    9720 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0512 01:01:00.639108    9720 node_conditions.go:123] node cpu capacity is 16
	I0512 01:01:00.639108    9720 node_conditions.go:105] duration metric: took 13.0031ms to run NodePressure ...
	I0512 01:01:00.639108    9720 start.go:213] waiting for startup goroutines ...
	I0512 01:01:00.669107    9720 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220512005140-7184: (1.2073842s)
	I0512 01:01:00.669107    9720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49879 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\pause-20220512005140-7184\id_rsa Username:docker}
	I0512 01:01:00.967034    9720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0512 01:00:56.848727    6824 cli_runner.go:217] Completed: docker exec cert-expiration-20220512005951-7184 stat /var/lib/dpkg/alternatives/iptables: (1.3104927s)
	I0512 01:00:56.848727    6824 oci.go:247] the created container "cert-expiration-20220512005951-7184" has a running status.
	I0512 01:00:56.848727    6824 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-expiration-20220512005951-7184\id_rsa...
	I0512 01:00:57.182119    6824 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-expiration-20220512005951-7184\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0512 01:00:58.500455    6824 cli_runner.go:164] Run: docker container inspect cert-expiration-20220512005951-7184 --format={{.State.Status}}
	I0512 01:00:59.625010    6824 cli_runner.go:217] Completed: docker container inspect cert-expiration-20220512005951-7184 --format={{.State.Status}}: (1.1244965s)
	I0512 01:00:59.649506    6824 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0512 01:00:59.649506    6824 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-20220512005951-7184 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0512 01:01:01.128559    6824 kic_runner.go:123] Done: [docker exec --privileged cert-expiration-20220512005951-7184 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.4789767s)
	I0512 01:01:01.131534    6824 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-expiration-20220512005951-7184\id_rsa...
	I0512 01:01:01.927594    9720 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0512 01:01:01.931600    9720 addons.go:417] enableAddons completed in 5.2151954s
	I0512 01:01:02.207223    9720 start.go:499] kubectl: 1.18.2, cluster: 1.23.5 (minor skew: 5)
	I0512 01:01:02.210231    9720 out.go:177] 
	W0512 01:01:02.213242    9720 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.23.5.
	I0512 01:01:02.224237    9720 out.go:177]   - Want kubectl v1.23.5? Try 'minikube kubectl -- get pods -A'
	I0512 01:01:02.232235    9720 out.go:177] * Done! kubectl is now configured to use "pause-20220512005140-7184" cluster and "default" namespace by default
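
The "minor skew: 5" warning above is the difference between the local kubectl minor version (1.18) and the cluster's (1.23); kubectl's support policy generally allows only a skew of one minor version. A trivial sketch of the comparison (assumes well-formed "major.minor.patch" strings):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorSkew returns the absolute difference between the minor components
	// of two "major.minor.patch" version strings.
	func minorSkew(a, b string) int {
		minor := func(v string) int {
			parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
			m, _ := strconv.Atoi(parts[1])
			return m
		}
		d := minor(a) - minor(b)
		if d < 0 {
			d = -d
		}
		return d
	}

	func main() {
		fmt.Println(minorSkew("1.18.2", "1.23.5")) // 5, matching the warning above
	}
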
	I0512 01:01:01.873595    8484 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname docker-flags-20220512005959-7184 --name docker-flags-20220512005959-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-20220512005959-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=docker-flags-20220512005959-7184 --network docker-flags-20220512005959-7184 --ip 192.168.49.2 --volume docker-flags-20220512005959-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: (5.3480401s)
	I0512 01:01:01.882597    8484 cli_runner.go:164] Run: docker container inspect docker-flags-20220512005959-7184 --format={{.State.Running}}
	I0512 01:01:03.195519    8484 cli_runner.go:217] Completed: docker container inspect docker-flags-20220512005959-7184 --format={{.State.Running}}: (1.3128541s)
	I0512 01:01:03.202509    8484 cli_runner.go:164] Run: docker container inspect docker-flags-20220512005959-7184 --format={{.State.Status}}
	I0512 01:01:04.524320    8484 cli_runner.go:217] Completed: docker container inspect docker-flags-20220512005959-7184 --format={{.State.Status}}: (1.3217435s)
	I0512 01:01:04.532316    8484 cli_runner.go:164] Run: docker exec docker-flags-20220512005959-7184 stat /var/lib/dpkg/alternatives/iptables
	I0512 01:01:01.700503    6824 cli_runner.go:164] Run: docker container inspect cert-expiration-20220512005951-7184 --format={{.State.Status}}
	I0512 01:01:03.004335    6824 cli_runner.go:217] Completed: docker container inspect cert-expiration-20220512005951-7184 --format={{.State.Status}}: (1.3037647s)
	I0512 01:01:03.004335    6824 machine.go:88] provisioning docker machine ...
	I0512 01:01:03.004335    6824 ubuntu.go:169] provisioning hostname "cert-expiration-20220512005951-7184"
	I0512 01:01:03.012202    6824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184
	I0512 01:01:04.319916    6824 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184: (1.3076471s)
	I0512 01:01:04.324901    6824 main.go:134] libmachine: Using SSH client type: native
	I0512 01:01:04.333895    6824 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50111 <nil> <nil>}
	I0512 01:01:04.333895    6824 main.go:134] libmachine: About to run SSH command:
	sudo hostname cert-expiration-20220512005951-7184 && echo "cert-expiration-20220512005951-7184" | sudo tee /etc/hostname
	I0512 01:01:04.551470    6824 main.go:134] libmachine: SSH cmd err, output: <nil>: cert-expiration-20220512005951-7184
	
	I0512 01:01:04.564303    6824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184
	I0512 01:01:05.697486    6824 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184: (1.1331246s)
	I0512 01:01:05.701487    6824 main.go:134] libmachine: Using SSH client type: native
	I0512 01:01:05.701487    6824 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50111 <nil> <nil>}
	I0512 01:01:05.701487    6824 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-20220512005951-7184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-20220512005951-7184/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-20220512005951-7184' | sudo tee -a /etc/hosts; 
				fi
			fi
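
The SSH script above is an idempotent hosts update: nothing happens when the hostname is already present, an existing 127.0.1.1 line is rewritten in place, and otherwise one is appended. An equivalent Go sketch (the presence test is a loose substring match, mirroring the grep):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHost mirrors the shell above: leave /etc/hosts alone if the hostname
	// is already present, rewrite an existing 127.0.1.1 line, or append one.
	func ensureHost(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		if strings.Contains(string(data), " "+hostname) || strings.Contains(string(data), "\t"+hostname) {
			return nil // already mapped, nothing to do
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		replaced := false
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname
				replaced = true
				break
			}
		}
		if !replaced {
			lines = append(lines, "127.0.1.1 "+hostname)
		}
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHost("/etc/hosts", "cert-expiration-20220512005951-7184"); err != nil {
			panic(err)
		}
		fmt.Println("hosts entry ensured")
	}
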
	I0512 01:01:05.935294    6824 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0512 01:01:05.935294    6824 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0512 01:01:05.935294    6824 ubuntu.go:177] setting up certificates
	I0512 01:01:05.935294    6824 provision.go:83] configureAuth start
	I0512 01:01:05.944290    6824 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-20220512005951-7184
	I0512 01:01:05.806016    8484 cli_runner.go:217] Completed: docker exec docker-flags-20220512005959-7184 stat /var/lib/dpkg/alternatives/iptables: (1.2735155s)
	I0512 01:01:05.806060    8484 oci.go:247] the created container "docker-flags-20220512005959-7184" has a running status.
	I0512 01:01:05.806301    8484 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\docker-flags-20220512005959-7184\id_rsa...
	I0512 01:01:06.013793    8484 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\docker-flags-20220512005959-7184\id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0512 01:01:06.020780    8484 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\docker-flags-20220512005959-7184\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0512 01:01:07.225364    8484 cli_runner.go:164] Run: docker container inspect docker-flags-20220512005959-7184 --format={{.State.Status}}
	I0512 01:01:08.395109    8484 cli_runner.go:217] Completed: docker container inspect docker-flags-20220512005959-7184 --format={{.State.Status}}: (1.1695839s)
	I0512 01:01:08.412760    8484 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0512 01:01:08.412760    8484 kic_runner.go:114] Args: [docker exec --privileged docker-flags-20220512005959-7184 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0512 01:01:09.752607    8484 kic_runner.go:123] Done: [docker exec --privileged docker-flags-20220512005959-7184 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.3397782s)
	I0512 01:01:09.756575    8484 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\docker-flags-20220512005959-7184\id_rsa...
	I0512 01:01:07.104406    6824 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-20220512005951-7184: (1.1600568s)
	I0512 01:01:07.104406    6824 provision.go:138] copyHostCerts
	I0512 01:01:07.104406    6824 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0512 01:01:07.104406    6824 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0512 01:01:07.104406    6824 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0512 01:01:07.105409    6824 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0512 01:01:07.105409    6824 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0512 01:01:07.106410    6824 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0512 01:01:07.107416    6824 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0512 01:01:07.107416    6824 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0512 01:01:07.107416    6824 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0512 01:01:07.108420    6824 provision.go:112] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.cert-expiration-20220512005951-7184 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube cert-expiration-20220512005951-7184]
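
The provision step above mints a server certificate signed by the profile CA, with the SAN list covering the container IP, loopback, and the machine names. A reduced crypto/x509 sketch of that signing (CA file names are placeholders; it assumes an RSA CA key in PKCS#1 form, and the SANs are copied from the san=[...] list logged above):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		caPEM, err := os.ReadFile("ca.pem") // hypothetical path to the CA cert
		must(err)
		caKeyPEM, err := os.ReadFile("ca-key.pem") // hypothetical path to the CA key
		must(err)
		caBlock, _ := pem.Decode(caPEM)
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		must(err)
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes RSA, PKCS#1
		must(err)

		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.cert-expiration-20220512005951-7184"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "cert-expiration-20220512005951-7184"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		must(err)
		must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
	}
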
	I0512 01:01:07.456711    6824 provision.go:172] copyRemoteCerts
	I0512 01:01:07.467759    6824 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0512 01:01:07.477988    6824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184
	I0512 01:01:08.632777    6824 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184: (1.1547292s)
	I0512 01:01:08.633499    6824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50111 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-expiration-20220512005951-7184\id_rsa Username:docker}
	I0512 01:01:08.774373    6824 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.3065461s)
	I0512 01:01:08.775371    6824 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0512 01:01:08.825445    6824 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0512 01:01:08.871683    6824 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1277 bytes)
	I0512 01:01:08.947899    6824 provision.go:86] duration metric: configureAuth took 3.012393s
	I0512 01:01:08.947899    6824 ubuntu.go:193] setting minikube options for container-runtime
	I0512 01:01:08.947899    6824 config.go:178] Loaded profile config "cert-expiration-20220512005951-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:01:08.958932    6824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184
	I0512 01:01:10.152616    6824 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184: (1.1936227s)
	I0512 01:01:10.156609    6824 main.go:134] libmachine: Using SSH client type: native
	I0512 01:01:10.156609    6824 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50111 <nil> <nil>}
	I0512 01:01:10.156609    6824 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0512 01:01:10.347539    6824 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0512 01:01:10.347539    6824 ubuntu.go:71] root file system type: overlay
	I0512 01:01:10.347539    6824 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0512 01:01:10.356883    6824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184
	I0512 01:01:11.498388    6824 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184: (1.1413833s)
	I0512 01:01:11.504495    6824 main.go:134] libmachine: Using SSH client type: native
	I0512 01:01:11.505234    6824 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50111 <nil> <nil>}
	I0512 01:01:11.505234    6824 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0512 01:01:10.382579    8484 cli_runner.go:164] Run: docker container inspect docker-flags-20220512005959-7184 --format={{.State.Status}}
	I0512 01:01:11.498514    8484 cli_runner.go:217] Completed: docker container inspect docker-flags-20220512005959-7184 --format={{.State.Status}}: (1.1157515s)
	I0512 01:01:11.498514    8484 machine.go:88] provisioning docker machine ...
	I0512 01:01:11.498514    8484 ubuntu.go:169] provisioning hostname "docker-flags-20220512005959-7184"
	I0512 01:01:11.511399    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184
	I0512 01:01:12.614413    8484 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184: (1.1029192s)
	I0512 01:01:12.618411    8484 main.go:134] libmachine: Using SSH client type: native
	I0512 01:01:12.618411    8484 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50123 <nil> <nil>}
	I0512 01:01:12.618411    8484 main.go:134] libmachine: About to run SSH command:
	sudo hostname docker-flags-20220512005959-7184 && echo "docker-flags-20220512005959-7184" | sudo tee /etc/hostname
	I0512 01:01:12.845911    8484 main.go:134] libmachine: SSH cmd err, output: <nil>: docker-flags-20220512005959-7184
	
	I0512 01:01:12.853912    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184
	I0512 01:01:14.044358    8484 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184: (1.1902352s)
	I0512 01:01:14.048988    8484 main.go:134] libmachine: Using SSH client type: native
	I0512 01:01:14.051240    8484 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50123 <nil> <nil>}
	I0512 01:01:14.051240    8484 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdocker-flags-20220512005959-7184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 docker-flags-20220512005959-7184/g' /etc/hosts;
				else 
					echo '127.0.1.1 docker-flags-20220512005959-7184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0512 01:01:14.249319    8484 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0512 01:01:14.249319    8484 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0512 01:01:14.249319    8484 ubuntu.go:177] setting up certificates
	I0512 01:01:14.249319    8484 provision.go:83] configureAuth start
	I0512 01:01:14.259922    8484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" docker-flags-20220512005959-7184
	I0512 01:01:11.736682    6824 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0512 01:01:11.744715    6824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184
	I0512 01:01:12.897920    6824 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184: (1.1531454s)
	I0512 01:01:12.903919    6824 main.go:134] libmachine: Using SSH client type: native
	I0512 01:01:12.903919    6824 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50111 <nil> <nil>}
	I0512 01:01:12.903919    6824 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
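
The one-liner above makes the provisioning idempotent: when diff finds no changes the restart is skipped entirely; otherwise the new unit replaces the old one and docker is reloaded, enabled, and restarted. The same pattern sketched in Go (paths from this log; a missing current unit is treated as a difference):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cur, _ := os.ReadFile("/lib/systemd/system/docker.service") // may not exist yet
		next, err := os.ReadFile("/lib/systemd/system/docker.service.new")
		if err != nil {
			panic(err)
		}
		if bytes.Equal(cur, next) {
			fmt.Println("unit unchanged; skipping restart")
			return
		}
		if err := os.Rename("/lib/systemd/system/docker.service.new",
			"/lib/systemd/system/docker.service"); err != nil {
			panic(err)
		}
		for _, args := range [][]string{
			{"systemctl", "-f", "daemon-reload"},
			{"systemctl", "-f", "enable", "docker"},
			{"systemctl", "-f", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				panic(fmt.Sprintf("%v: %s", err, out))
			}
		}
		fmt.Println("docker unit updated and restarted")
	}
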
	I0512 01:01:18.234786    2560 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cert-options-20220512010013-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir: (29.7513039s)
	I0512 01:01:18.234786    2560 kic.go:188] duration metric: took 29.758421 seconds to extract preloaded images to volume
	I0512 01:01:18.243798    2560 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:01:15.364468    8484 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" docker-flags-20220512005959-7184: (1.1043824s)
	I0512 01:01:15.364468    8484 provision.go:138] copyHostCerts
	I0512 01:01:15.364468    8484 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I0512 01:01:15.364468    8484 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0512 01:01:15.364468    8484 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0512 01:01:15.365543    8484 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0512 01:01:15.366876    8484 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I0512 01:01:15.367120    8484 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0512 01:01:15.367120    8484 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0512 01:01:15.367470    8484 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0512 01:01:15.368309    8484 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I0512 01:01:15.368535    8484 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0512 01:01:15.368535    8484 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0512 01:01:15.368535    8484 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0512 01:01:15.369251    8484 provision.go:112] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.docker-flags-20220512005959-7184 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube docker-flags-20220512005959-7184]
	I0512 01:01:15.607218    8484 provision.go:172] copyRemoteCerts
	I0512 01:01:15.616254    8484 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0512 01:01:15.622955    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184
	I0512 01:01:16.716054    8484 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184: (1.0930425s)
	I0512 01:01:16.716054    8484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50123 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\docker-flags-20220512005959-7184\id_rsa Username:docker}
	I0512 01:01:16.869623    8484 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.2533046s)
	I0512 01:01:16.869623    8484 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0512 01:01:16.870206    8484 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0512 01:01:16.932249    8484 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0512 01:01:16.932249    8484 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1269 bytes)
	I0512 01:01:16.988490    8484 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0512 01:01:16.989029    8484 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0512 01:01:17.046438    8484 provision.go:86] duration metric: configureAuth took 2.7969741s
	I0512 01:01:17.046543    8484 ubuntu.go:193] setting minikube options for container-runtime
	I0512 01:01:17.046543    8484 config.go:178] Loaded profile config "docker-flags-20220512005959-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:01:17.055943    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184
	I0512 01:01:18.139785    8484 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184: (1.0837867s)
	I0512 01:01:18.144847    8484 main.go:134] libmachine: Using SSH client type: native
	I0512 01:01:18.144847    8484 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50123 <nil> <nil>}
	I0512 01:01:18.144847    8484 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0512 01:01:18.271802    8484 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0512 01:01:18.271802    8484 ubuntu.go:71] root file system type: overlay
	I0512 01:01:18.272788    8484 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0512 01:01:18.281789    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184
	I0512 01:01:19.374497    8484 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184: (1.0926519s)
	I0512 01:01:19.378509    8484 main.go:134] libmachine: Using SSH client type: native
	I0512 01:01:19.379497    8484 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50123 <nil> <nil>}
	I0512 01:01:19.379497    8484 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="FOO=BAR"
	Environment="BAZ=BAT"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --debug --icc=true 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0512 01:01:19.571306    8484 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=FOO=BAR
	Environment=BAZ=BAT
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --debug --icc=true 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0512 01:01:19.582529    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184
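
The -f argument in the inspect command above is a Go text/template that the docker CLI evaluates against the container's JSON: it indexes NetworkSettings.Ports by "22/tcp", takes the first binding, and reads its HostPort, which is how minikube discovers the host port forwarded to the container's SSH port (the surrounding single quotes are just cli_runner's command quoting). A standalone sketch of the same template against mock data (the binding values here are made up):

    package main

    import (
    	"os"
    	"text/template"
    )

    // binding mirrors one entry of docker's NetworkSettings.Ports map.
    type binding struct{ HostIP, HostPort string }

    func main() {
    	data := map[string]interface{}{
    		"NetworkSettings": map[string]interface{}{
    			"Ports": map[string][]binding{
    				"22/tcp": {{HostIP: "127.0.0.1", HostPort: "50123"}},
    			},
    		},
    	}
    	tpl := template.Must(template.New("port").Parse(
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
    	if err := tpl.Execute(os.Stdout, data); err != nil { // prints 50123
    		panic(err)
    	}
    }
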
	I0512 01:01:18.601467    6824 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-05 13:17:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-12 01:01:11.720408000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0512 01:01:18.601533    6824 machine.go:91] provisioned docker machine in 15.5963932s
	I0512 01:01:18.601533    6824 client.go:171] LocalClient.Create took 1m18.3395908s
	I0512 01:01:18.601627    6824 start.go:173] duration metric: libmachine.API.Create for "cert-expiration-20220512005951-7184" took 1m18.339685s
	I0512 01:01:18.601688    6824 start.go:306] post-start starting for "cert-expiration-20220512005951-7184" (driver="docker")
	I0512 01:01:18.601688    6824 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0512 01:01:18.616861    6824 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0512 01:01:18.622848    6824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184
	I0512 01:01:19.736940    6824 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184: (1.114034s)
	I0512 01:01:19.736940    6824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50111 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-expiration-20220512005951-7184\id_rsa Username:docker}
	I0512 01:01:19.864375    6824 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.24745s)
	I0512 01:01:19.875799    6824 ssh_runner.go:195] Run: cat /etc/os-release
	I0512 01:01:19.887803    6824 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0512 01:01:19.887803    6824 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0512 01:01:19.887803    6824 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0512 01:01:19.887803    6824 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0512 01:01:19.887803    6824 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0512 01:01:19.888331    6824 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0512 01:01:19.889253    6824 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem -> 71842.pem in /etc/ssl/certs
	I0512 01:01:19.901018    6824 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0512 01:01:19.937101    6824 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /etc/ssl/certs/71842.pem (1708 bytes)
	I0512 01:01:20.000910    6824 start.go:309] post-start completed in 1.3990955s
	I0512 01:01:20.014586    6824 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-20220512005951-7184
	I0512 01:01:21.114399    6824 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-20220512005951-7184: (1.0997164s)
	I0512 01:01:21.114399    6824 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\config.json ...
	I0512 01:01:21.127400    6824 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 01:01:21.137420    6824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184
	I0512 01:01:20.442467    2560 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1984368s)
	I0512 01:01:20.443015    2560 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:87 OomKillDisable:true NGoroutines:63 SystemTime:2022-05-12 01:01:19.3019212 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:01:20.454532    2560 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0512 01:01:22.696767    2560 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.2421186s)
	I0512 01:01:22.706440    2560 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-options-20220512010013-7184 --name cert-options-20220512010013-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-20220512010013-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-options-20220512010013-7184 --network cert-options-20220512010013-7184 --ip 192.168.76.2 --volume cert-options-20220512010013-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8555 --publish=127.0.0.1::8555 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
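
The docker run invocation above is how the KIC driver materializes a "node": a privileged container with tmpfs mounts, a per-cluster network and volume, and host-port publishes for SSH (22), the Docker daemon (2376), and the API server. A trimmed Go sketch of issuing an equivalent run via os/exec (the container name and image tag are placeholders; the real invocation also wires up labels, CPU/memory limits, and the cluster network):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("docker", "run", "-d", "-t",
    		"--privileged", "--security-opt", "seccomp=unconfined",
    		"--tmpfs", "/tmp", "--tmpfs", "/run",
    		"--publish", "127.0.0.1::22", "--publish", "127.0.0.1::2376",
    		"--name", "example-node",
    		"gcr.io/k8s-minikube/kicbase:v0.0.30").CombinedOutput()
    	if err != nil {
    		panic(fmt.Sprintf("%v: %s", err, out))
    	}
    	fmt.Printf("container id: %s", out) // docker run -d prints the new container ID
    }
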
	I0512 01:01:20.705031    8484 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184: (1.1222467s)
	I0512 01:01:20.709484    8484 main.go:134] libmachine: Using SSH client type: native
	I0512 01:01:20.710084    8484 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50123 <nil> <nil>}
	I0512 01:01:20.710676    8484 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0512 01:01:22.128442    8484 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-05 13:17:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-12 01:01:19.557643000 +0000
	@@ -1,30 +1,34 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+Environment=FOO=BAR
	+Environment=BAZ=BAT
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --debug --icc=true 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +36,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0512 01:01:22.128669    8484 machine.go:91] provisioned docker machine in 10.6295589s
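
The restart sequence just above hinges on diff's exit status: "diff -u old new" exits 0 when the files match, so the "|| { mv ...; systemctl ...; }" branch runs only when the freshly generated unit actually differs, and the unified diff printed in the log is that command's output. A sketch of driving the same idiom from Go (paths as in the log; running it locally stands in for the SSH round-trip):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Replace-and-restart only when the new unit differs from the
    	// installed one; the diff itself becomes the captured output.
    	script := `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service;
      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker;
    }`
    	out, err := exec.Command("bash", "-c", script).CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil { // non-nil only if the replace/restart branch failed
    		panic(err)
    	}
    }
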
	I0512 01:01:22.128669    8484 client.go:171] LocalClient.Create took 1m11.7843979s
	I0512 01:01:22.128764    8484 start.go:173] duration metric: libmachine.API.Create for "docker-flags-20220512005959-7184" took 1m11.7844924s
	I0512 01:01:22.128817    8484 start.go:306] post-start starting for "docker-flags-20220512005959-7184" (driver="docker")
	I0512 01:01:22.128817    8484 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0512 01:01:22.152080    8484 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0512 01:01:22.159860    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184
	I0512 01:01:23.264263    8484 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184: (1.1041168s)
	I0512 01:01:23.264263    8484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50123 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\docker-flags-20220512005959-7184\id_rsa Username:docker}
	I0512 01:01:23.395529    8484 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.2432474s)
	I0512 01:01:23.409281    8484 ssh_runner.go:195] Run: cat /etc/os-release
	I0512 01:01:23.422901    8484 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0512 01:01:23.422901    8484 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0512 01:01:23.422901    8484 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0512 01:01:23.422901    8484 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0512 01:01:23.422901    8484 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0512 01:01:23.423437    8484 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0512 01:01:23.424396    8484 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem -> 71842.pem in /etc/ssl/certs
	I0512 01:01:23.424396    8484 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem -> /etc/ssl/certs/71842.pem
	I0512 01:01:23.438689    8484 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0512 01:01:23.461682    8484 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /etc/ssl/certs/71842.pem (1708 bytes)
	I0512 01:01:23.507696    8484 start.go:309] post-start completed in 1.3788078s
	I0512 01:01:23.517684    8484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" docker-flags-20220512005959-7184
	I0512 01:01:24.787609    8484 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" docker-flags-20220512005959-7184: (1.2698594s)
	I0512 01:01:24.787609    8484 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\config.json ...
	I0512 01:01:24.808610    8484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 01:01:24.827954    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184
	I0512 01:01:22.285440    6824 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184: (1.147961s)
	I0512 01:01:22.286044    6824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50111 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-expiration-20220512005951-7184\id_rsa Username:docker}
	I0512 01:01:22.424011    6824 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.2965443s)
	I0512 01:01:22.435012    6824 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0512 01:01:22.449019    6824 start.go:134] duration metric: createHost completed in 1m22.1978999s
	I0512 01:01:22.449019    6824 start.go:81] releasing machines lock for "cert-expiration-20220512005951-7184", held for 1m22.1978999s
	I0512 01:01:22.458008    6824 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-20220512005951-7184
	I0512 01:01:23.577230    6824 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-20220512005951-7184: (1.1190847s)
	I0512 01:01:23.580429    6824 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0512 01:01:23.588461    6824 ssh_runner.go:195] Run: systemctl --version
	I0512 01:01:23.591435    6824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184
	I0512 01:01:23.599428    6824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184
	I0512 01:01:24.849769    6824 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184: (1.2582692s)
	I0512 01:01:24.849769    6824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50111 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-expiration-20220512005951-7184\id_rsa Username:docker}
	I0512 01:01:24.874718    6824 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184: (1.2752234s)
	I0512 01:01:24.875713    6824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50111 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-expiration-20220512005951-7184\id_rsa Username:docker}
	I0512 01:01:25.091726    6824 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.5112191s)
	I0512 01:01:25.091726    6824 ssh_runner.go:235] Completed: systemctl --version: (1.5031871s)
	I0512 01:01:25.104724    6824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0512 01:01:25.154752    6824 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 01:01:25.182747    6824 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0512 01:01:25.192736    6824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0512 01:01:25.219758    6824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0512 01:01:25.273727    6824 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0512 01:01:25.431640    6824 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0512 01:01:25.629517    6824 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 01:01:25.675511    6824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0512 01:01:25.868554    6824 ssh_runner.go:195] Run: sudo systemctl start docker
	I0512 01:01:25.917542    6824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 01:01:26.028537    6824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 01:01:26.142711    6824 out.go:204] * Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	I0512 01:01:26.151701    6824 cli_runner.go:164] Run: docker exec -t cert-expiration-20220512005951-7184 dig +short host.docker.internal
	I0512 01:01:24.928727    2560 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-options-20220512010013-7184 --name cert-options-20220512010013-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-options-20220512010013-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-options-20220512010013-7184 --network cert-options-20220512010013-7184 --ip 192.168.76.2 --volume cert-options-20220512010013-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8555 --publish=127.0.0.1::8555 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: (2.2221728s)
	I0512 01:01:24.941748    2560 cli_runner.go:164] Run: docker container inspect cert-options-20220512010013-7184 --format={{.State.Running}}
	I0512 01:01:26.327717    2560 cli_runner.go:217] Completed: docker container inspect cert-options-20220512010013-7184 --format={{.State.Running}}: (1.3858976s)
	I0512 01:01:26.339722    2560 cli_runner.go:164] Run: docker container inspect cert-options-20220512010013-7184 --format={{.State.Status}}
	I0512 01:01:27.489166    2560 cli_runner.go:217] Completed: docker container inspect cert-options-20220512010013-7184 --format={{.State.Status}}: (1.1493843s)
	I0512 01:01:27.496172    2560 cli_runner.go:164] Run: docker exec cert-options-20220512010013-7184 stat /var/lib/dpkg/alternatives/iptables
	I0512 01:01:28.766883    2560 cli_runner.go:217] Completed: docker exec cert-options-20220512010013-7184 stat /var/lib/dpkg/alternatives/iptables: (1.2706458s)
	I0512 01:01:28.766883    2560 oci.go:247] the created container "cert-options-20220512010013-7184" has a running status.
	I0512 01:01:28.766883    2560 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-options-20220512010013-7184\id_rsa...
	I0512 01:01:26.279710    8484 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184: (1.4515679s)
	I0512 01:01:26.279710    8484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50123 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\docker-flags-20220512005959-7184\id_rsa Username:docker}
	I0512 01:01:26.427715    8484 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.6190215s)
	I0512 01:01:26.447823    8484 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0512 01:01:26.464743    8484 start.go:134] duration metric: createHost completed in 1m16.1262785s
	I0512 01:01:26.464743    8484 start.go:81] releasing machines lock for "docker-flags-20220512005959-7184", held for 1m16.1262785s
	I0512 01:01:26.477727    8484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" docker-flags-20220512005959-7184
	I0512 01:01:27.616180    8484 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" docker-flags-20220512005959-7184: (1.1383943s)
	I0512 01:01:27.618172    8484 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0512 01:01:27.627240    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184
	I0512 01:01:27.629170    8484 ssh_runner.go:195] Run: systemctl --version
	I0512 01:01:27.639168    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184
	I0512 01:01:28.735603    8484 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184: (1.1083058s)
	I0512 01:01:28.735603    8484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50123 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\docker-flags-20220512005959-7184\id_rsa Username:docker}
	I0512 01:01:28.751610    8484 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184: (1.1123849s)
	I0512 01:01:28.751610    8484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50123 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\docker-flags-20220512005959-7184\id_rsa Username:docker}
	I0512 01:01:28.958252    8484 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.3400111s)
	I0512 01:01:28.958252    8484 ssh_runner.go:235] Completed: systemctl --version: (1.3290135s)
	I0512 01:01:28.971235    8484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0512 01:01:29.017574    8484 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 01:01:29.044552    8484 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0512 01:01:29.056558    8484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0512 01:01:29.088688    8484 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0512 01:01:29.133687    8484 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0512 01:01:29.316415    8484 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0512 01:01:29.491081    8484 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 01:01:29.527085    8484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0512 01:01:29.706463    8484 ssh_runner.go:195] Run: sudo systemctl start docker
	I0512 01:01:29.749263    8484 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 01:01:29.837868    8484 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 01:01:29.930547    8484 out.go:204] * Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	I0512 01:01:29.933540    8484 out.go:177]   - opt debug
	I0512 01:01:29.935567    8484 out.go:177]   - opt icc=true
	I0512 01:01:29.940545    8484 out.go:177]   - env FOO=BAR
	I0512 01:01:29.942588    8484 out.go:177]   - env BAZ=BAT
	I0512 01:01:29.954549    8484 cli_runner.go:164] Run: docker exec -t docker-flags-20220512005959-7184 dig +short host.docker.internal
	I0512 01:01:27.520163    6824 cli_runner.go:217] Completed: docker exec -t cert-expiration-20220512005951-7184 dig +short host.docker.internal: (1.368391s)
	I0512 01:01:27.520163    6824 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0512 01:01:27.532174    6824 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0512 01:01:27.543190    6824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
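
Two steps happen here: the dig against host.docker.internal recovers the host's IP as seen from inside the container (192.168.65.2 on Docker Desktop), and the bash one-liner rewrites /etc/hosts by filtering out any stale host.minikube.internal line, appending the fresh mapping, and sudo-copying a temp file into place (a plain > redirect would be opened by the unprivileged shell, not by sudo). A Go sketch of the same rewrite (upsertHost is a hypothetical helper; the filtering logic mirrors the grep -v / echo / cp idiom in the log):

    package main

    import (
    	"os"
    	"strings"
    )

    // upsertHost rewrites an /etc/hosts-style file so that exactly one
    // line maps hostname to ip.
    func upsertHost(path, ip, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.HasSuffix(line, "\t"+hostname) {
    			continue // drop any stale mapping, like grep -v does
    		}
    		if line != "" {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+hostname)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	if err := upsertHost("/etc/hosts", "192.168.65.2", "host.minikube.internal"); err != nil {
    		panic(err)
    	}
    }
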
	I0512 01:01:27.580175    6824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184
	I0512 01:01:28.704609    6824 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cert-expiration-20220512005951-7184: (1.1243751s)
	I0512 01:01:28.704609    6824 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:01:28.715602    6824 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 01:01:28.789608    6824 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 01:01:28.789608    6824 docker.go:541] Images already preloaded, skipping extraction
	I0512 01:01:28.797607    6824 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 01:01:28.863225    6824 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 01:01:28.863225    6824 cache_images.go:84] Images are preloaded, skipping loading
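
The preload check above is a set comparison: list what the daemon already has with docker images --format {{.Repository}}:{{.Tag}} and skip the preload tarball extraction when every expected image is present. A sketch of that check (the expected list is excerpted from the log output above; this is not minikube's cache_images code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, img := range strings.Fields(string(out)) {
    		have[img] = true
    	}
    	expected := []string{
    		"k8s.gcr.io/kube-apiserver:v1.23.5",
    		"k8s.gcr.io/etcd:3.5.1-0",
    		"k8s.gcr.io/pause:3.6",
    	}
    	for _, img := range expected {
    		if !have[img] {
    			fmt.Println("missing, would extract preload:", img)
    			return
    		}
    	}
    	fmt.Println("images already preloaded, skipping extraction")
    }
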
	I0512 01:01:28.875232    6824 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0512 01:01:29.086707    6824 cni.go:95] Creating CNI manager for ""
	I0512 01:01:29.086707    6824 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 01:01:29.086707    6824 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0512 01:01:29.086707    6824 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-20220512005951-7184 NodeName:cert-expiration-20220512005951-7184 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0512 01:01:29.086707    6824 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "cert-expiration-20220512005951-7184"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0512 01:01:29.087711    6824 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=cert-expiration-20220512005951-7184 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:cert-expiration-20220512005951-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0512 01:01:29.100691    6824 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0512 01:01:29.120694    6824 binaries.go:44] Found k8s binaries, skipping transfer
	I0512 01:01:29.132691    6824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0512 01:01:29.154697    6824 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0512 01:01:29.190852    6824 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0512 01:01:29.226627    6824 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2057 bytes)
	I0512 01:01:29.281438    6824 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0512 01:01:29.293430    6824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0512 01:01:29.319434    6824 certs.go:54] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184 for IP: 192.168.58.2
	I0512 01:01:29.320431    6824 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I0512 01:01:29.320431    6824 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I0512 01:01:29.320431    6824 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\client.key
	I0512 01:01:29.321437    6824 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\client.crt with IP's: []
	I0512 01:01:29.663362    6824 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\client.crt ...
	I0512 01:01:29.663362    6824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\client.crt: {Name:mk59f7f9a2d6b10c4154a76f61e943b20d776224 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:01:29.665356    6824 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\client.key ...
	I0512 01:01:29.665356    6824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\client.key: {Name:mk1ec0a94d42d3957b20e0c2b3333686fb45267f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:01:29.666365    6824 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\apiserver.key.cee25041
	I0512 01:01:29.666552    6824 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0512 01:01:30.307503    6824 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\apiserver.crt.cee25041 ...
	I0512 01:01:30.307503    6824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\apiserver.crt.cee25041: {Name:mk8a82900262278fe57b022f526909b8f4332da2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:01:30.309087    6824 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\apiserver.key.cee25041 ...
	I0512 01:01:30.309087    6824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\apiserver.key.cee25041: {Name:mkf883c96cbe5e1018cc77fe39f73df0a3f806fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:01:30.310083    6824 certs.go:320] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\apiserver.crt.cee25041 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\apiserver.crt
	I0512 01:01:30.316083    6824 certs.go:324] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\apiserver.key.cee25041 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\apiserver.key
	I0512 01:01:30.317086    6824 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\proxy-client.key
	I0512 01:01:30.317086    6824 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\proxy-client.crt with IP's: []
	I0512 01:01:31.185095    6824 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\proxy-client.crt ...
	I0512 01:01:31.185095    6824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\proxy-client.crt: {Name:mk47cad0841a0cb1a109298ec4273b249da73a1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:01:31.187106    6824 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\proxy-client.key ...
	I0512 01:01:31.187106    6824 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\proxy-client.key: {Name:mk06ba077163a2ff00372fbb3c125307956d972e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
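
The sequence above generates the profile's client, apiserver, and aggregator proxy-client key pairs; the apiserver certificate carries the IP SANs logged earlier (192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1) so the endpoint is valid under any of those addresses. A minimal Go sketch of producing such a certificate with crypto/x509 (assumptions: RSA-2048 and self-signed for brevity, whereas minikube's crypto.go signs with its minikubeCA):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		// The IP SANs from the log's apiserver cert generation line.
    		IPAddresses: []net.IP{
    			net.ParseIP("192.168.58.2"), net.ParseIP("10.96.0.1"),
    			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    		},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
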
	I0512 01:01:31.195100    6824 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184.pem (1338 bytes)
	W0512 01:01:31.195100    6824 certs.go:384] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184_empty.pem, impossibly tiny 0 bytes
	I0512 01:01:31.195100    6824 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0512 01:01:31.196094    6824 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0512 01:01:31.196094    6824 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0512 01:01:31.196094    6824 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0512 01:01:31.197103    6824 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem (1708 bytes)
	I0512 01:01:31.199095    6824 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0512 01:01:31.277763    6824 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0512 01:01:31.334742    6824 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0512 01:01:31.380739    6824 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cert-expiration-20220512005951-7184\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0512 01:01:31.425738    6824 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0512 01:01:31.470745    6824 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0512 01:01:31.517754    6824 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0512 01:01:31.559782    6824 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0512 01:01:31.608250    6824 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0512 01:01:29.416086    2560 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-options-20220512010013-7184\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0512 01:01:30.705909    2560 cli_runner.go:164] Run: docker container inspect cert-options-20220512010013-7184 --format={{.State.Status}}
	I0512 01:01:31.837776    2560 cli_runner.go:217] Completed: docker container inspect cert-options-20220512010013-7184 --format={{.State.Status}}: (1.1317492s)
	I0512 01:01:31.856789    2560 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0512 01:01:31.856789    2560 kic_runner.go:114] Args: [docker exec --privileged cert-options-20220512010013-7184 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0512 01:01:33.221258    2560 kic_runner.go:123] Done: [docker exec --privileged cert-options-20220512010013-7184 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.3643991s)
	I0512 01:01:33.226280    2560 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cert-options-20220512010013-7184\id_rsa...
	I0512 01:01:31.313734    8484 cli_runner.go:217] Completed: docker exec -t docker-flags-20220512005959-7184 dig +short host.docker.internal: (1.3591152s)
	I0512 01:01:31.313734    8484 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0512 01:01:31.324740    8484 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0512 01:01:31.335743    8484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0512 01:01:31.369729    8484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184
	I0512 01:01:32.511656    8484 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" docker-flags-20220512005959-7184: (1.1418676s)
	I0512 01:01:32.513806    8484 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:01:32.520566    8484 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 01:01:32.596915    8484 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 01:01:32.596915    8484 docker.go:541] Images already preloaded, skipping extraction
	I0512 01:01:32.607264    8484 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0512 01:01:32.786810    8484 cni.go:95] Creating CNI manager for ""
	I0512 01:01:32.786810    8484 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 01:01:32.786810    8484 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0512 01:01:32.786810    8484 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:docker-flags-20220512005959-7184 NodeName:docker-flags-20220512005959-7184 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0512 01:01:32.786810    8484 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "docker-flags-20220512005959-7184"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0512 01:01:32.786810    8484 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=docker-flags-20220512005959-7184 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:docker-flags-20220512005959-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0512 01:01:32.797813    8484 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0512 01:01:32.818820    8484 binaries.go:44] Found k8s binaries, skipping transfer
	I0512 01:01:32.827807    8484 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0512 01:01:32.851811    8484 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (358 bytes)
	I0512 01:01:32.889829    8484 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0512 01:01:32.934236    8484 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I0512 01:01:32.986228    8484 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0512 01:01:32.998246    8484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
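The two commands above make the control-plane /etc/hosts entry idempotent: the grep checks whether the exact entry already exists, and the bash one-liner strips any stale line for the name, appends the current IP, and sudo-copies the temp file back into place. A small Go sketch that assembles the same one-liner (hypothetical helper, not minikube's source):

    package main

    import "fmt"

    // hostsUpdateCmd reproduces the logged one-liner: drop any existing
    // line ending in "<TAB><name>", append "ip<TAB>name", then install
    // the temp file over /etc/hosts with sudo cp.
    func hostsUpdateCmd(ip, name string) string {
    	return fmt.Sprintf(
    		"{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
    		name, ip, name)
    }

    func main() {
    	fmt.Println(hostsUpdateCmd("192.168.49.2", "control-plane.minikube.internal"))
    }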
	I0512 01:01:33.041235    8484 certs.go:54] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184 for IP: 192.168.49.2
	I0512 01:01:33.041235    8484 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I0512 01:01:33.042250    8484 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I0512 01:01:33.042250    8484 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\client.key
	I0512 01:01:33.043240    8484 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\client.crt with IP's: []
	I0512 01:01:33.434439    8484 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\client.crt ...
	I0512 01:01:33.434439    8484 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\client.crt: {Name:mk85546348a47b26dbe8501e6f1a9cf43bc76708 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:01:33.435436    8484 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\client.key ...
	I0512 01:01:33.435436    8484 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\client.key: {Name:mkf69355a50f2e78a00d263854bbd0be15b969e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:01:33.436437    8484 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\apiserver.key.dd3b5fb2
	I0512 01:01:33.436877    8484 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0512 01:01:34.358385    8484 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\apiserver.crt.dd3b5fb2 ...
	I0512 01:01:34.358385    8484 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\apiserver.crt.dd3b5fb2: {Name:mk0ede4f461778686e8e6153bdb45abe04c22e99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:01:34.359395    8484 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\apiserver.key.dd3b5fb2 ...
	I0512 01:01:34.359395    8484 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\apiserver.key.dd3b5fb2: {Name:mkcd7595a78fb0d0ecca2611b5ed7086c0dff498 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:01:34.360399    8484 certs.go:320] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\apiserver.crt.dd3b5fb2 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\apiserver.crt
	I0512 01:01:34.366400    8484 certs.go:324] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\apiserver.key.dd3b5fb2 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\apiserver.key
	I0512 01:01:34.369393    8484 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\proxy-client.key
	I0512 01:01:34.369393    8484 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\proxy-client.crt with IP's: []
	I0512 01:01:34.588050    8484 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\proxy-client.crt ...
	I0512 01:01:34.588050    8484 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\proxy-client.crt: {Name:mk7e77bef3e55af52381328636a8d5400e7a6adb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:01:34.589043    8484 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\proxy-client.key ...
	I0512 01:01:34.589043    8484 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\proxy-client.key: {Name:mk6fb42247d789e66c9913f4c6916e8852bdb2b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
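Each profile certificate above (client, apiserver, proxy-client) is generated on the host and signed by the matching CA before being copied into the node. A self-contained sketch of CA-signed certificate generation with Go's crypto/x509, using the apiserver SAN IPs from the log; minikube's crypto.go additionally handles serial numbers, file locking, and the on-disk layout:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Self-signed CA (stands in for minikubeCA).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Leaf cert carrying the apiserver's IP SANs from the log.
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(1, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
    			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    		},
    	}
    	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }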
	I0512 01:01:34.590043    8484 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0512 01:01:34.590043    8484 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0512 01:01:34.590043    8484 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0512 01:01:34.595633    8484 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0512 01:01:34.595875    8484 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0512 01:01:34.596210    8484 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0512 01:01:34.596210    8484 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0512 01:01:34.596658    8484 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0512 01:01:34.596788    8484 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184.pem (1338 bytes)
	W0512 01:01:34.597753    8484 certs.go:384] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184_empty.pem, impossibly tiny 0 bytes
	I0512 01:01:34.597753    8484 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0512 01:01:34.598049    8484 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0512 01:01:34.598367    8484 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0512 01:01:34.598802    8484 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0512 01:01:34.599637    8484 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem (1708 bytes)
	I0512 01:01:34.599820    8484 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem -> /usr/share/ca-certificates/71842.pem
	I0512 01:01:34.599993    8484 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:01:34.599993    8484 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184.pem -> /usr/share/ca-certificates/7184.pem
	I0512 01:01:34.600630    8484 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0512 01:01:34.649485    8484 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0512 01:01:34.699941    8484 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0512 01:01:34.750340    8484 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\docker-flags-20220512005959-7184\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0512 01:01:34.808198    8484 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0512 01:01:34.876117    8484 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0512 01:01:34.918116    8484 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0512 01:01:34.967120    8484 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0512 01:01:35.015129    8484 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /usr/share/ca-certificates/71842.pem (1708 bytes)
	I0512 01:01:35.062015    8484 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0512 01:01:35.137727    8484 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184.pem --> /usr/share/ca-certificates/7184.pem (1338 bytes)
	I0512 01:01:31.663817    6824 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184.pem --> /usr/share/ca-certificates/7184.pem (1338 bytes)
	I0512 01:01:31.712576    6824 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /usr/share/ca-certificates/71842.pem (1708 bytes)
	I0512 01:01:31.760594    6824 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0512 01:01:31.808574    6824 ssh_runner.go:195] Run: openssl version
	I0512 01:01:31.833578    6824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0512 01:01:31.880619    6824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:01:31.900480    6824 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 11 23:00 /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:01:31.913832    6824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:01:31.948218    6824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0512 01:01:31.984224    6824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7184.pem && ln -fs /usr/share/ca-certificates/7184.pem /etc/ssl/certs/7184.pem"
	I0512 01:01:32.017218    6824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7184.pem
	I0512 01:01:32.028218    6824 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 11 23:10 /usr/share/ca-certificates/7184.pem
	I0512 01:01:32.039250    6824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7184.pem
	I0512 01:01:32.068248    6824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7184.pem /etc/ssl/certs/51391683.0"
	I0512 01:01:32.104652    6824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71842.pem && ln -fs /usr/share/ca-certificates/71842.pem /etc/ssl/certs/71842.pem"
	I0512 01:01:32.136557    6824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71842.pem
	I0512 01:01:32.154146    6824 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 11 23:10 /usr/share/ca-certificates/71842.pem
	I0512 01:01:32.164096    6824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71842.pem
	I0512 01:01:32.186107    6824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71842.pem /etc/ssl/certs/3ec20f2e.0"
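The symlink targets b5213941.0, 51391683.0, and 3ec20f2e.0 above are OpenSSL subject-hash names: `openssl x509 -hash -noout` prints an 8-hex-digit hash of the certificate subject, and a `<hash>.0` symlink in /etc/ssl/certs lets OpenSSL locate the CA by hash lookup. A small Go sketch of the same hash-and-link step (assumes openssl is on PATH; not minikube's code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash mirrors the logged openssl/ln commands: compute
    // `openssl x509 -hash -noout -in certPath` and symlink
    // /etc/ssl/certs/<hash>.0 to the cert so OpenSSL can find it.
    func linkBySubjectHash(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// ln -fs semantics: remove any stale link, then create the new one.
    	_ = os.Remove(link)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }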
	I0512 01:01:32.211550    6824 kubeadm.go:391] StartCluster: {Name:cert-expiration-20220512005951-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:cert-expiration-20220512005951-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 01:01:32.220665    6824 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0512 01:01:32.299330    6824 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0512 01:01:32.335285    6824 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0512 01:01:32.361279    6824 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0512 01:01:32.371895    6824 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0512 01:01:32.411726    6824 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0512 01:01:32.411726    6824 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
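kubeadm init is started with a fixed list of preflight checks suppressed; SystemVerification is among them because, under the docker driver, the node container shares the host kernel (the "ignoring SystemVerification" line above). A minimal sketch of assembling that command line (hypothetical helper; the real builder in minikube differs):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // kubeadmInitCmd mirrors the shape of the Start command in the log.
    func kubeadmInitCmd(version, config string, ignore []string) string {
    	bin := "/var/lib/minikube/binaries/" + version
    	return fmt.Sprintf("sudo env PATH=%q kubeadm init --config %s --ignore-preflight-errors=%s",
    		bin+":$PATH", config, strings.Join(ignore, ","))
    }

    func main() {
    	ignore := []string{"DirAvailable--etc-kubernetes-manifests", "Port-10250",
    		"Swap", "Mem", "SystemVerification"}
    	fmt.Println(kubeadmInitCmd("v1.23.5", "/var/tmp/minikube/kubeadm.yaml", ignore))
    }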
	I0512 01:01:33.800025    2560 cli_runner.go:164] Run: docker container inspect cert-options-20220512010013-7184 --format={{.State.Status}}
	I0512 01:01:34.907114    2560 cli_runner.go:217] Completed: docker container inspect cert-options-20220512010013-7184 --format={{.State.Status}}: (1.1070318s)
	I0512 01:01:34.907114    2560 machine.go:88] provisioning docker machine ...
	I0512 01:01:34.907114    2560 ubuntu.go:169] provisioning hostname "cert-options-20220512010013-7184"
	I0512 01:01:34.915121    2560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20220512010013-7184
	I0512 01:01:36.072860    2560 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20220512010013-7184: (1.1576794s)
	I0512 01:01:36.080861    2560 main.go:134] libmachine: Using SSH client type: native
	I0512 01:01:36.087962    2560 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50161 <nil> <nil>}
	I0512 01:01:36.087962    2560 main.go:134] libmachine: About to run SSH command:
	sudo hostname cert-options-20220512010013-7184 && echo "cert-options-20220512010013-7184" | sudo tee /etc/hostname
	I0512 01:01:36.279197    2560 main.go:134] libmachine: SSH cmd err, output: <nil>: cert-options-20220512010013-7184
	
	I0512 01:01:36.291211    2560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20220512010013-7184
	I0512 01:01:37.417777    2560 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-20220512010013-7184: (1.1265081s)
	I0512 01:01:37.423740    2560 main.go:134] libmachine: Using SSH client type: native
	I0512 01:01:37.423771    2560 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50161 <nil> <nil>}
	I0512 01:01:37.423771    2560 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-options-20220512010013-7184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-20220512010013-7184/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-options-20220512010013-7184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0512 01:01:37.552793    2560 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0512 01:01:37.552793    2560 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0512 01:01:37.552793    2560 ubuntu.go:177] setting up certificates
	I0512 01:01:37.552793    2560 provision.go:83] configureAuth start
	I0512 01:01:37.560804    2560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-options-20220512010013-7184
	I0512 01:01:38.669594    2560 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-options-20220512010013-7184: (1.1086343s)
	I0512 01:01:38.669594    2560 provision.go:138] copyHostCerts
	I0512 01:01:38.669761    2560 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0512 01:01:38.669761    2560 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0512 01:01:38.670306    2560 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0512 01:01:38.671687    2560 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0512 01:01:38.671746    2560 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0512 01:01:38.671981    2560 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0512 01:01:38.672884    2560 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0512 01:01:38.672884    2560 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0512 01:01:38.672884    2560 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0512 01:01:38.674014    2560 provision.go:112] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.cert-options-20220512010013-7184 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube cert-options-20220512010013-7184]
	I0512 01:01:35.203001    8484 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0512 01:01:35.269749    8484 ssh_runner.go:195] Run: openssl version
	I0512 01:01:35.296136    8484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71842.pem && ln -fs /usr/share/ca-certificates/71842.pem /etc/ssl/certs/71842.pem"
	I0512 01:01:35.328006    8484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71842.pem
	I0512 01:01:35.341995    8484 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 11 23:10 /usr/share/ca-certificates/71842.pem
	I0512 01:01:35.351997    8484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71842.pem
	I0512 01:01:35.373998    8484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71842.pem /etc/ssl/certs/3ec20f2e.0"
	I0512 01:01:35.409011    8484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0512 01:01:35.441991    8484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:01:35.460059    8484 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 11 23:00 /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:01:35.473409    8484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:01:35.505210    8484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0512 01:01:35.547668    8484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7184.pem && ln -fs /usr/share/ca-certificates/7184.pem /etc/ssl/certs/7184.pem"
	I0512 01:01:35.591105    8484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7184.pem
	I0512 01:01:35.599691    8484 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 11 23:10 /usr/share/ca-certificates/7184.pem
	I0512 01:01:35.609667    8484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7184.pem
	I0512 01:01:35.630679    8484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7184.pem /etc/ssl/certs/51391683.0"
	I0512 01:01:35.656546    8484 kubeadm.go:391] StartCluster: {Name:docker-flags-20220512005959-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:docker-flags-20220512005959-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 01:01:35.668985    8484 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0512 01:01:35.745769    8484 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0512 01:01:35.778519    8484 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0512 01:01:35.798370    8484 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0512 01:01:35.809378    8484 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0512 01:01:35.832386    8484 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0512 01:01:35.833372    8484 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-05-12 00:58:27 UTC, end at Thu 2022-05-12 01:01:49 UTC. --
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.600079000Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.600122800Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.600144700Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.603242100Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.603380800Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.603422900Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.603450500Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.829452600Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.859866300Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.859968900Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.859991300Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.859999400Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.860010200Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.860017400Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	May 12 00:58:46 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:46.860268100Z" level=info msg="Loading containers: start."
	May 12 00:58:47 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:47.064954200Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 12 00:58:47 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:47.168245600Z" level=info msg="Loading containers: done."
	May 12 00:58:47 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:47.287750800Z" level=info msg="Docker daemon" commit=4433bf6 graphdriver(s)=overlay2 version=20.10.15
	May 12 00:58:47 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:47.287901600Z" level=info msg="Daemon has completed initialization"
	May 12 00:58:47 pause-20220512005140-7184 systemd[1]: Started Docker Application Container Engine.
	May 12 00:58:47 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:47.353224900Z" level=info msg="API listen on [::]:2376"
	May 12 00:58:47 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:58:47.359564200Z" level=info msg="API listen on /var/run/docker.sock"
	May 12 00:59:43 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T00:59:43.626822600Z" level=info msg="ignoring event" container=18a71909db628e98999d5e631afcee12b0535efc00d4ad63d9e6d8d03f0fca72 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:00:22 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T01:00:22.852574500Z" level=info msg="ignoring event" container=93c48e3561a563768d9850597bce373e6d471c669bab3559fc5f6127eb8cbead module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:00:24 pause-20220512005140-7184 dockerd[508]: time="2022-05-12T01:00:24.806323600Z" level=info msg="ignoring event" container=d2c19d84bf254bb6896aaf87a79d1237501c216f6232e2c8f4fda3cd9ce82963 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	e6a72fa21d5f9       6e38f40d628db       47 seconds ago       Running             storage-provisioner       0                   7f2018d24dcb9
	fb0603d9b195e       a4ca41631cc7a       About a minute ago   Running             coredns                   0                   ef7464c48a751
	b075ccd54d6d7       3c53fa8541f95       About a minute ago   Running             kube-proxy                0                   fab7a31437673
	52389506cb8f7       b0c9e5e4dbb14       2 minutes ago        Running             kube-controller-manager   1                   668b95fd5cf75
	2fc40eb3e688c       884d49d6d8c9f       2 minutes ago        Running             kube-scheduler            0                   0a97c038a0ef4
	a5606a57f2a0c       25f8c7f3da61c       2 minutes ago        Running             etcd                      0                   8fb3af689928d
	18a71909db628       b0c9e5e4dbb14       2 minutes ago        Exited              kube-controller-manager   0                   668b95fd5cf75
	34d00adfe03d1       3fc1d62d65872       2 minutes ago        Running             kube-apiserver            0                   fc0e6231e879c
	
	* 
	* ==> coredns [fb0603d9b195] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20220512005140-7184
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20220512005140-7184
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50a7977b568d2ad3e04003527a57f4502d6177a0
	                    minikube.k8s.io/name=pause-20220512005140-7184
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_12T00_59_46_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 May 2022 00:59:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20220512005140-7184
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 May 2022 01:01:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 May 2022 01:00:09 +0000   Thu, 12 May 2022 00:59:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 May 2022 01:00:09 +0000   Thu, 12 May 2022 00:59:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 May 2022 01:00:09 +0000   Thu, 12 May 2022 00:59:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 May 2022 01:00:09 +0000   Thu, 12 May 2022 00:59:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    pause-20220512005140-7184
	Capacity:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  263174212Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             52638988Ki
	  pods:               110
	System Info:
	  Machine ID:                 8556a0a9a0e64ba4b825f672d2dce0b9
	  System UUID:                8556a0a9a0e64ba4b825f672d2dce0b9
	  Boot ID:                    10186544-b659-4889-afdb-c2512535b797
	  Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.15
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-6rqbl                              100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     99s
	  kube-system                 etcd-pause-20220512005140-7184                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         2m10s
	  kube-system                 kube-apiserver-pause-20220512005140-7184             250m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-pause-20220512005140-7184    200m (1%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-proxy-sk8qh                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-scheduler-pause-20220512005140-7184             100m (0%)     0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 94s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  2m26s (x8 over 2m27s)  kubelet     Node pause-20220512005140-7184 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m26s (x8 over 2m27s)  kubelet     Node pause-20220512005140-7184 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m26s (x7 over 2m27s)  kubelet     Node pause-20220512005140-7184 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m2s                   kubelet     Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m1s                   kubelet     Node pause-20220512005140-7184 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s                   kubelet     Node pause-20220512005140-7184 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m1s                   kubelet     Node pause-20220512005140-7184 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  2m1s                   kubelet     Node pause-20220512005140-7184 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m                     kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                110s                   kubelet     Node pause-20220512005140-7184 status is now: NodeReady
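The PodCIDR/PodCIDRs fields above come from the controller-manager's allocate-node-cidrs=true flag (set in the kubeadm config earlier): each node is handed a /24 slice of the 10.244.0.0/16 cluster CIDR. A one-liner sketch of that carving in Go (illustrative arithmetic only):

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	// allocate-node-cidrs hands each node a /24 out of the cluster CIDR;
    	// the first node gets the first slice, as seen in PodCIDR above.
    	cluster := netip.MustParsePrefix("10.244.0.0/16")
    	node := netip.PrefixFrom(cluster.Addr(), 24)
    	fmt.Println(node) // 10.244.0.0/24
    }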
	
	* 
	* ==> dmesg <==
	* [May12 00:41] WSL2: Performing memory compaction.
	[May12 00:42] WSL2: Performing memory compaction.
	[May12 00:43] WSL2: Performing memory compaction.
	[May12 00:44] WSL2: Performing memory compaction.
	[May12 00:45] WSL2: Performing memory compaction.
	[May12 00:46] WSL2: Performing memory compaction.
	[May12 00:47] WSL2: Performing memory compaction.
	[May12 00:48] WSL2: Performing memory compaction.
	[May12 00:49] process 'docker/tmp/qemu-check071081722/check' started with executable stack
	[ +21.082981] WSL2: Performing memory compaction.
	[May12 00:51] WSL2: Performing memory compaction.
	[May12 00:52] WSL2: Performing memory compaction.
	[May12 00:54] WSL2: Performing memory compaction.
	[May12 00:56] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.010948] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.036593] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000001] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000001] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[May12 00:57] WSL2: Performing memory compaction.
	[May12 00:58] WSL2: Performing memory compaction.
	[May12 01:00] WSL2: Performing memory compaction.
	[May12 01:01] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [a5606a57f2a0] <==
	* {"level":"warn","ts":"2022-05-12T01:01:15.872Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:01:14.310Z","time spent":"1.5618874s","remote":"127.0.0.1:54086","response type":"/etcdserverpb.KV/Range","request count":0,"request size":30,"response count":1,"response size":367,"request content":"key:\"/registry/namespaces/default\" "}
	{"level":"warn","ts":"2022-05-12T01:01:15.872Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:01:14.941Z","time spent":"931.1066ms","remote":"127.0.0.1:54112","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2022-05-12T01:01:15.872Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"140.5704ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2022-05-12T01:01:15.872Z","caller":"traceutil/trace.go:171","msg":"trace[360830156] range","detail":"{range_begin:/registry/csinodes/; range_end:/registry/csinodes0; response_count:0; response_revision:521; }","duration":"141.0913ms","start":"2022-05-12T01:01:15.731Z","end":"2022-05-12T01:01:15.872Z","steps":["trace[360830156] 'count revisions from in-memory index tree'  (duration: 140.4751ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-12T01:01:15.872Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"775.2498ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2022-05-12T01:01:15.872Z","caller":"traceutil/trace.go:171","msg":"trace[501765133] range","detail":"{range_begin:/registry/storageclasses/; range_end:/registry/storageclasses0; response_count:0; response_revision:521; }","duration":"775.9528ms","start":"2022-05-12T01:01:15.096Z","end":"2022-05-12T01:01:15.872Z","steps":["trace[501765133] 'count revisions from in-memory index tree'  (duration: 775.137ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-12T01:01:15.873Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:01:15.096Z","time spent":"776.0933ms","remote":"127.0.0.1:54180","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":31,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true "}
	{"level":"warn","ts":"2022-05-12T01:01:16.909Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"627.7362ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2289940453759133128 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.67.2\" mod_revision:515 > success:<request_put:<key:\"/registry/masterleases/192.168.67.2\" value_size:67 lease:2289940453759133126 >> failure:<request_range:<key:\"/registry/masterleases/192.168.67.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-05-12T01:01:16.909Z","caller":"traceutil/trace.go:171","msg":"trace[1973444395] linearizableReadLoop","detail":"{readStateIndex:554; appliedIndex:553; }","duration":"970.0203ms","start":"2022-05-12T01:01:15.939Z","end":"2022-05-12T01:01:16.909Z","steps":["trace[1973444395] 'read index received'  (duration: 342.0045ms)","trace[1973444395] 'applied index is now lower than readState.Index'  (duration: 628.0119ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-12T01:01:16.910Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"970.2145ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-05-12T01:01:16.910Z","caller":"traceutil/trace.go:171","msg":"trace[134297754] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:522; }","duration":"970.3321ms","start":"2022-05-12T01:01:15.939Z","end":"2022-05-12T01:01:16.910Z","steps":["trace[134297754] 'agreement among raft nodes before linearized reading'  (duration: 970.1351ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-12T01:01:16.910Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:01:15.939Z","time spent":"970.3845ms","remote":"127.0.0.1:54112","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2022-05-12T01:01:16.910Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"957.0695ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1127"}
	{"level":"info","ts":"2022-05-12T01:01:16.910Z","caller":"traceutil/trace.go:171","msg":"trace[489068306] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:522; }","duration":"957.1391ms","start":"2022-05-12T01:01:15.953Z","end":"2022-05-12T01:01:16.910Z","steps":["trace[489068306] 'agreement among raft nodes before linearized reading'  (duration: 957.06ms)"],"step_count":1}
	{"level":"warn","ts":"2022-05-12T01:01:16.910Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:01:15.953Z","time spent":"957.2418ms","remote":"127.0.0.1:54088","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1151,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2022-05-12T01:01:16.910Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"902.4111ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2022-05-12T01:01:16.910Z","caller":"traceutil/trace.go:171","msg":"trace[728535268] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:0; response_revision:522; }","duration":"902.4589ms","start":"2022-05-12T01:01:16.007Z","end":"2022-05-12T01:01:16.910Z","steps":["trace[728535268] 'agreement among raft nodes before linearized reading'  (duration: 902.3709ms)"],"step_count":1}
	{"level":"info","ts":"2022-05-12T01:01:16.910Z","caller":"traceutil/trace.go:171","msg":"trace[1105670514] transaction","detail":"{read_only:false; response_revision:522; number_of_response:1; }","duration":"1.0286107s","start":"2022-05-12T01:01:15.881Z","end":"2022-05-12T01:01:16.910Z","steps":["trace[1105670514] 'process raft request'  (duration: 400.4977ms)","trace[1105670514] 'compare'  (duration: 626.8637ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-12T01:01:16.910Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:01:15.881Z","time spent":"1.029035s","remote":"127.0.0.1:54064","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.67.2\" mod_revision:515 > success:<request_put:<key:\"/registry/masterleases/192.168.67.2\" value_size:67 lease:2289940453759133126 >> failure:<request_range:<key:\"/registry/masterleases/192.168.67.2\" > >"}
	{"level":"warn","ts":"2022-05-12T01:01:16.910Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:01:16.007Z","time spent":"902.504ms","remote":"127.0.0.1:54216","response type":"/etcdserverpb.KV/Range","request count":0,"request size":96,"response count":29,"response size":31,"request content":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true "}
	{"level":"info","ts":"2022-05-12T01:01:17.126Z","caller":"traceutil/trace.go:171","msg":"trace[39428066] linearizableReadLoop","detail":"{readStateIndex:555; appliedIndex:555; }","duration":"182.2554ms","start":"2022-05-12T01:01:16.944Z","end":"2022-05-12T01:01:17.126Z","steps":["trace[39428066] 'read index received'  (duration: 182.2416ms)","trace[39428066] 'applied index is now lower than readState.Index'  (duration: 10.8µs)"],"step_count":2}
	{"level":"warn","ts":"2022-05-12T01:01:17.187Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"243.3007ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-05-12T01:01:17.187Z","caller":"traceutil/trace.go:171","msg":"trace[357678386] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:523; }","duration":"243.5336ms","start":"2022-05-12T01:01:16.944Z","end":"2022-05-12T01:01:17.187Z","steps":["trace[357678386] 'agreement among raft nodes before linearized reading'  (duration: 182.4481ms)","trace[357678386] 'range keys from in-memory index tree'  (duration: 60.8249ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-12T01:01:41.521Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"161.6043ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2289940453759133243 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-20220512005140-7184\" mod_revision:532 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-20220512005140-7184\" value_size:532 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-20220512005140-7184\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-05-12T01:01:41.522Z","caller":"traceutil/trace.go:171","msg":"trace[954127990] transaction","detail":"{read_only:false; response_revision:540; number_of_response:1; }","duration":"162.0627ms","start":"2022-05-12T01:01:41.360Z","end":"2022-05-12T01:01:41.522Z","steps":["trace[954127990] 'compare'  (duration: 161.2697ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  01:01:49 up  2:09,  0 users,  load average: 4.05, 4.56, 3.38
	Linux pause-20220512005140-7184 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [34d00adfe03d] <==
	* I0512 01:00:07.929566       1 trace.go:205] Trace[1096747061]: "Get" url:/api/v1/namespaces/default/serviceaccounts/default,user-agent:kubectl/v1.23.5 (linux/amd64) kubernetes/c285e78,audit-id:792b9dc1-3ed2-41ce-ad1f-142fdf130a04,client:127.0.0.1,accept:application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json,protocol:HTTP/2.0 (12-May-2022 01:00:02.055) (total time: 5873ms):
	Trace[1096747061]: [5.8735741s] [5.8735741s] END
	I0512 01:00:07.929157       1 trace.go:205] Trace[115992878]: "Create" url:/api/v1/namespaces/kube-system/serviceaccounts,user-agent:kube-controller-manager/v1.23.5 (linux/amd64) kubernetes/c285e78/kube-controller-manager,audit-id:a8df53b7-c4b1-4a35-9a57-fce024d6e22c,client:192.168.67.2,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (12-May-2022 01:00:01.822) (total time: 6106ms):
	Trace[115992878]: ---"Object stored in database" 6105ms (01:00:07.928)
	Trace[115992878]: [6.1062849s] [6.1062849s] END
	I0512 01:00:09.812754       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0512 01:00:09.902316       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0512 01:00:14.726740       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0512 01:00:44.986351       1 trace.go:205] Trace[1192640284]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (12-May-2022 01:00:44.314) (total time: 672ms):
	Trace[1192640284]: ---"Transaction committed" 668ms (01:00:44.986)
	Trace[1192640284]: [672.1393ms] [672.1393ms] END
	I0512 01:01:00.176257       1 trace.go:205] Trace[1230668400]: "List etcd3" key:/pods/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: (12-May-2022 01:00:59.494) (total time: 681ms):
	Trace[1230668400]: [681.9148ms] [681.9148ms] END
	I0512 01:01:00.178306       1 trace.go:205] Trace[1401788407]: "List" url:/api/v1/namespaces/kube-system/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,audit-id:82410fa3-3d8f-4f14-a39e-1eddc9eac436,client:192.168.67.1,accept:application/json, */*,protocol:HTTP/2.0 (12-May-2022 01:00:59.494) (total time: 683ms):
	Trace[1401788407]: ---"Listing from storage done" 682ms (01:01:00.176)
	Trace[1401788407]: [683.9758ms] [683.9758ms] END
	I0512 01:01:15.873525       1 trace.go:205] Trace[973272629]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.23.5 (linux/amd64) kubernetes/c285e78,audit-id:d1153530-27d6-412a-8213-836eba1e2be8,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (12-May-2022 01:01:14.309) (total time: 1564ms):
	Trace[973272629]: ---"About to write a response" 1564ms (01:01:15.873)
	Trace[973272629]: [1.5641768s] [1.5641768s] END
	I0512 01:01:16.912222       1 trace.go:205] Trace[648684563]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:f22c37cc-e9c0-43fb-94aa-8269f7b17ea7,client:192.168.67.2,accept:application/json, */*,protocol:HTTP/2.0 (12-May-2022 01:01:15.951) (total time: 960ms):
	Trace[648684563]: ---"About to write a response" 960ms (01:01:16.911)
	Trace[648684563]: [960.2776ms] [960.2776ms] END
	I0512 01:01:16.912963       1 trace.go:205] Trace[711113022]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (12-May-2022 01:01:15.877) (total time: 1035ms):
	Trace[711113022]: ---"Transaction committed" 1032ms (01:01:16.912)
	Trace[711113022]: [1.0356261s] [1.0356261s] END
	
	* 
	* ==> kube-controller-manager [18a71909db62] <==
	* 	/usr/local/go/src/bytes/buffer.go:204 +0x98
	crypto/tls.(*Conn).readFromUntil(0xc0009c6e00, {0x4d4fe80, 0xc000128018}, 0x8ef)
		/usr/local/go/src/crypto/tls/conn.go:799 +0xe5
	crypto/tls.(*Conn).readRecordOrCCS(0xc0009c6e00, 0x0)
		/usr/local/go/src/crypto/tls/conn.go:606 +0x112
	crypto/tls.(*Conn).readRecord(...)
		/usr/local/go/src/crypto/tls/conn.go:574
	crypto/tls.(*Conn).Read(0xc0009c6e00, {0xc00128b000, 0x1000, 0x919560})
		/usr/local/go/src/crypto/tls/conn.go:1277 +0x16f
	bufio.(*Reader).Read(0xc0003c9440, {0xc00128c040, 0x9, 0x934bc2})
		/usr/local/go/src/bufio/bufio.go:227 +0x1b4
	io.ReadAtLeast({0x4d47860, 0xc0003c9440}, {0xc00128c040, 0x9, 0x9}, 0x9)
		/usr/local/go/src/io/io.go:328 +0x9a
	io.ReadFull(...)
		/usr/local/go/src/io/io.go:347
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader({0xc00128c040, 0x9, 0xc00102b110}, {0x4d47860, 0xc0003c9440})
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x6e
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc00128c000)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:498 +0x95
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00063ff98)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:2101 +0x130
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc001288000)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1997 +0x6f
	created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:725 +0xac5
	
	* 
	* ==> kube-controller-manager [52389506cb8f] <==
	* I0512 01:00:09.503610       1 disruption.go:371] Sending events to api server.
	I0512 01:00:09.514563       1 shared_informer.go:247] Caches are synced for job 
	I0512 01:00:09.522374       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0512 01:00:09.602674       1 shared_informer.go:247] Caches are synced for resource quota 
	I0512 01:00:09.602675       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0512 01:00:09.602675       1 shared_informer.go:247] Caches are synced for resource quota 
	I0512 01:00:09.602769       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0512 01:00:09.603302       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0512 01:00:09.603484       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0512 01:00:09.603553       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0512 01:00:09.603689       1 shared_informer.go:247] Caches are synced for HPA 
	I0512 01:00:09.603757       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0512 01:00:09.603904       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0512 01:00:09.610406       1 shared_informer.go:247] Caches are synced for cronjob 
	I0512 01:00:09.705079       1 range_allocator.go:374] Set node pause-20220512005140-7184 PodCIDR to [10.244.0.0/24]
	I0512 01:00:09.923057       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0512 01:00:09.923243       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0512 01:00:09.928867       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0512 01:00:09.928966       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0512 01:00:10.120209       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sk8qh"
	I0512 01:00:10.306992       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-6rqbl"
	I0512 01:00:10.424887       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-jt5dx"
	I0512 01:00:10.809631       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	E0512 01:00:10.903903       1 replica_set.go:536] sync "kube-system/coredns-64897985d" failed with Operation cannot be fulfilled on replicasets.apps "coredns-64897985d": the object has been modified; please apply your changes to the latest version and try again
	I0512 01:00:10.914019       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-jt5dx"
	
	* 
	* ==> kube-proxy [b075ccd54d6d] <==
	* E0512 01:00:13.919024       1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I0512 01:00:13.923387       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0512 01:00:14.005854       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0512 01:00:14.009902       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0512 01:00:14.014553       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0512 01:00:14.017702       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0512 01:00:14.208358       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0512 01:00:14.208510       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0512 01:00:14.208590       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0512 01:00:14.716981       1 server_others.go:206] "Using iptables Proxier"
	I0512 01:00:14.717126       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0512 01:00:14.717147       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0512 01:00:14.717188       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0512 01:00:14.718482       1 server.go:656] "Version info" version="v1.23.5"
	I0512 01:00:14.719990       1 config.go:226] "Starting endpoint slice config controller"
	I0512 01:00:14.720321       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0512 01:00:14.721226       1 config.go:317] "Starting service config controller"
	I0512 01:00:14.721253       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0512 01:00:14.821982       1 shared_informer.go:247] Caches are synced for service config 
	I0512 01:00:14.902572       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [2fc40eb3e688] <==
	* E0512 00:59:40.581820       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0512 00:59:40.796618       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0512 00:59:40.796748       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0512 00:59:40.826087       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0512 00:59:40.826224       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0512 00:59:41.108608       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0512 00:59:41.108718       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0512 00:59:41.174238       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0512 00:59:41.174354       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0512 00:59:41.325504       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0512 00:59:41.325656       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0512 00:59:41.608211       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0512 00:59:41.608381       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0512 00:59:42.342260       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0512 00:59:42.342418       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0512 00:59:42.439027       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0512 00:59:42.439185       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0512 00:59:42.609349       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0512 00:59:42.609510       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0512 00:59:52.520252       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0512 00:59:52.520441       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0512 00:59:52.520602       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0512 00:59:52.520733       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0512 00:59:52.669964       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0512 00:59:53.417197       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-05-12 00:58:27 UTC, end at Thu 2022-05-12 01:01:50 UTC. --
	May 12 01:01:48 pause-20220512005140-7184 kubelet[2193]: I0512 01:01:48.452886    2193 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/e99d636dff03f5e3da6f24ae67869c67/volumes"
	May 12 01:01:48 pause-20220512005140-7184 kubelet[2193]: time="2022-05-12T01:01:48Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/poda5c9cbf1e212ca46487a0f77b2ba121a/08fa720e4f2f8fdc25f6ee1871ff67f889a08da6980ed52c949411cd5ecbc2a0: device or resource busy"
	May 12 01:01:48 pause-20220512005140-7184 kubelet[2193]: time="2022-05-12T01:01:48Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod82c25bf5694d0bebfaf7ccd6aa8f20a5/0b63581e4b1f71a98c3551c67dc5468e3a7a1b5a65fa579b022274a41a7cce65: device or resource busy"
	May 12 01:01:48 pause-20220512005140-7184 kubelet[2193]: time="2022-05-12T01:01:48Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod1c4da7ece64655cb68bd55ce65a833a0/3c99a58f2055509fb8894bd5a6484ab6b5f03eb8aa20d9c987c462fba0b1446f: device or resource busy"
	May 12 01:01:48 pause-20220512005140-7184 kubelet[2193]: time="2022-05-12T01:01:48Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod9c5b7a2e5e249e16424eb9d040bf20d9/a2c4cf5a719903a6ff0e5a5efe92c7db154c94f3a3e92f2383b6aae26017980a: device or resource busy"
	May 12 01:01:48 pause-20220512005140-7184 kubelet[2193]: time="2022-05-12T01:01:48Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pode99d636dff03f5e3da6f24ae67869c67/aa28c752fe758a2723e31d987fe053f3bb0fc8e8b74d9365b076aaf9d38fca1d: device or resource busy"
	May 12 01:01:48 pause-20220512005140-7184 kubelet[2193]: time="2022-05-12T01:01:48Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod04768e53f2a1c22ac973569c7458edad/6c11849f651038cdc886171f1052a877317520a5b2145e8a1a29812f23b6f96e: device or resource busy"
	May 12 01:01:48 pause-20220512005140-7184 kubelet[2193]: time="2022-05-12T01:01:48Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/podda06069e620cad6c9eac71b88ae00fde/b63d21055dd442e8ab117ad6aa86c32f9708f6fcadc0649bf33e5f501261df9a: device or resource busy"
	May 12 01:01:48 pause-20220512005140-7184 kubelet[2193]: time="2022-05-12T01:01:48Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod11ed937b1ac6188defdcf43e402b8b40/2af73f5578baf8d3dadfafc5a96c295bb3386dba2999b2cadda50bb73542f565: device or resource busy"
	May 12 01:01:48 pause-20220512005140-7184 kubelet[2193]: time="2022-05-12T01:01:48Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod04768e53f2a1c22ac973569c7458edad/6c11849f651038cdc886171f1052a877317520a5b2145e8a1a29812f23b6f96e: device or resource busy"
	May 12 01:01:48 pause-20220512005140-7184 kubelet[2193]: time="2022-05-12T01:01:48Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/podda06069e620cad6c9eac71b88ae00fde/b63d21055dd442e8ab117ad6aa86c32f9708f6fcadc0649bf33e5f501261df9a: device or resource busy"
	May 12 01:01:48 pause-20220512005140-7184 kubelet[2193]: time="2022-05-12T01:01:48Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/poda5c9cbf1e212ca46487a0f77b2ba121a/08fa720e4f2f8fdc25f6ee1871ff67f889a08da6980ed52c949411cd5ecbc2a0: device or resource busy"
	May 12 01:01:48 pause-20220512005140-7184 kubelet[2193]: time="2022-05-12T01:01:48Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pode99d636dff03f5e3da6f24ae67869c67/aa28c752fe758a2723e31d987fe053f3bb0fc8e8b74d9365b076aaf9d38fca1d: device or resource busy"
	May 12 01:01:48 pause-20220512005140-7184 kubelet[2193]: time="2022-05-12T01:01:48Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod9c5b7a2e5e249e16424eb9d040bf20d9/a2c4cf5a719903a6ff0e5a5efe92c7db154c94f3a3e92f2383b6aae26017980a: device or resource busy"
	May 12 01:01:48 pause-20220512005140-7184 kubelet[2193]: I0512 01:01:48.609850    2193 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod04768e53f2a1c22ac973569c7458edad] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod04768e53f2a1c22ac973569c7458edad] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod04768e53f2a1c22ac973569c7458edad]"
	May 12 01:01:48 pause-20220512005140-7184 kubelet[2193]: I0512 01:01:48.609984    2193 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable podda06069e620cad6c9eac71b88ae00fde] err="unable to destroy cgroup paths for cgroup [kubepods burstable podda06069e620cad6c9eac71b88ae00fde] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/podda06069e620cad6c9eac71b88ae00fde]"
	May 12 01:01:48 pause-20220512005140-7184 kubelet[2193]: I0512 01:01:48.610007    2193 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable poda5c9cbf1e212ca46487a0f77b2ba121a] err="unable to destroy cgroup paths for cgroup [kubepods burstable poda5c9cbf1e212ca46487a0f77b2ba121a] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/poda5c9cbf1e212ca46487a0f77b2ba121a]"
	May 12 01:01:48 pause-20220512005140-7184 kubelet[2193]: I0512 01:01:48.610012    2193 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pode99d636dff03f5e3da6f24ae67869c67] err="unable to destroy cgroup paths for cgroup [kubepods burstable pode99d636dff03f5e3da6f24ae67869c67] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pode99d636dff03f5e3da6f24ae67869c67]"
	May 12 01:01:48 pause-20220512005140-7184 kubelet[2193]: I0512 01:01:48.610065    2193 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod9c5b7a2e5e249e16424eb9d040bf20d9] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod9c5b7a2e5e249e16424eb9d040bf20d9] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod9c5b7a2e5e249e16424eb9d040bf20d9]"
	May 12 01:01:48 pause-20220512005140-7184 kubelet[2193]: time="2022-05-12T01:01:48Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod82c25bf5694d0bebfaf7ccd6aa8f20a5/0b63581e4b1f71a98c3551c67dc5468e3a7a1b5a65fa579b022274a41a7cce65: device or resource busy"
	May 12 01:01:48 pause-20220512005140-7184 kubelet[2193]: time="2022-05-12T01:01:48Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod11ed937b1ac6188defdcf43e402b8b40/2af73f5578baf8d3dadfafc5a96c295bb3386dba2999b2cadda50bb73542f565: device or resource busy"
	May 12 01:01:48 pause-20220512005140-7184 kubelet[2193]: I0512 01:01:48.610344    2193 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod82c25bf5694d0bebfaf7ccd6aa8f20a5] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod82c25bf5694d0bebfaf7ccd6aa8f20a5] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod82c25bf5694d0bebfaf7ccd6aa8f20a5]"
	May 12 01:01:48 pause-20220512005140-7184 kubelet[2193]: time="2022-05-12T01:01:48Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod1c4da7ece64655cb68bd55ce65a833a0/3c99a58f2055509fb8894bd5a6484ab6b5f03eb8aa20d9c987c462fba0b1446f: device or resource busy"
	May 12 01:01:48 pause-20220512005140-7184 kubelet[2193]: I0512 01:01:48.610361    2193 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod11ed937b1ac6188defdcf43e402b8b40] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod11ed937b1ac6188defdcf43e402b8b40] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod11ed937b1ac6188defdcf43e402b8b40]"
	May 12 01:01:48 pause-20220512005140-7184 kubelet[2193]: I0512 01:01:48.610411    2193 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod1c4da7ece64655cb68bd55ce65a833a0] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod1c4da7ece64655cb68bd55ce65a833a0] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod1c4da7ece64655cb68bd55ce65a833a0]"
	
	* 
	* ==> storage-provisioner [e6a72fa21d5f] <==
	* I0512 01:01:03.687115       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0512 01:01:03.732723       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0512 01:01:03.732922       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0512 01:01:03.776074       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0512 01:01:03.776531       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20220512005140-7184_79c0bd3a-7d46-426c-bc29-8ccb891e3a6f!
	I0512 01:01:03.776536       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"78896ed9-a2ba-43cf-b67f-5cc8ac1c18c0", APIVersion:"v1", ResourceVersion:"512", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20220512005140-7184_79c0bd3a-7d46-426c-bc29-8ccb891e3a6f became leader
	I0512 01:01:03.877681       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20220512005140-7184_79c0bd3a-7d46-426c-bc29-8ccb891e3a6f!
	

                                                
                                                
-- /stdout --
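Note on the kubelet section of the dump above: the repeated "rmdir ... device or resource busy" errors all sit under /sys/fs/cgroup/rdma, i.e. the rdma cgroup controller's pod directories could not be removed after the containers were torn down. A minimal sketch for inspecting the leftover paths from the host, assuming the profile's node container is still running; the exact paths globbed below are illustrative, not taken from this run:

	# List pod-level rdma cgroup directories that survived cleanup
	out/minikube-windows-amd64.exe -p pause-20220512005140-7184 ssh -- "find /sys/fs/cgroup/rdma/kubepods -mindepth 1 -type d"
	# An empty cgroup.procs means no process is pinning the directory
	out/minikube-windows-amd64.exe -p pause-20220512005140-7184 ssh -- "cat /sys/fs/cgroup/rdma/kubepods/burstable/pod*/cgroup.procs"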
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-20220512005140-7184 -n pause-20220512005140-7184
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-20220512005140-7184 -n pause-20220512005140-7184: (7.8561412s)
helpers_test.go:261: (dbg) Run:  kubectl --context pause-20220512005140-7184 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestPause/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context pause-20220512005140-7184 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-20220512005140-7184 describe pod : exit status 1 (341.1932ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context pause-20220512005140-7184 describe pod : exit status 1
--- FAIL: TestPause/serial/Pause (57.59s)
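For reference, the "resource name may not be empty" exit above is a direct consequence of the preceding step: the field selector matched no non-running pods, so `kubectl describe pod` was invoked with an empty name list. A minimal sketch of the same post-mortem step with a guard, assuming a POSIX shell (the PODS variable is illustrative, not part of the test harness):

	# Collect non-running pod names exactly as the harness does
	PODS=$(kubectl --context pause-20220512005140-7184 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}')
	# Only describe when at least one name came back
	[ -n "$PODS" ] && kubectl --context pause-20220512005140-7184 describe pod $PODS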

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (121.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-20220512010611-7184 --alsologtostderr -v=1
E0512 01:17:00.944182    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-20220512010315-7184\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p embed-certs-20220512010611-7184 --alsologtostderr -v=1: exit status 80 (11.7186434s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-20220512010611-7184 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0512 01:16:57.277266    4928 out.go:296] Setting OutFile to fd 1568 ...
	I0512 01:16:57.333864    4928 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:16:57.333864    4928 out.go:309] Setting ErrFile to fd 1524...
	I0512 01:16:57.333864    4928 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:16:57.343893    4928 out.go:303] Setting JSON to false
	I0512 01:16:57.343893    4928 mustload.go:65] Loading cluster: embed-certs-20220512010611-7184
	I0512 01:16:57.344869    4928 config.go:178] Loaded profile config "embed-certs-20220512010611-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:16:57.363864    4928 cli_runner.go:164] Run: docker container inspect embed-certs-20220512010611-7184 --format={{.State.Status}}
	I0512 01:16:59.970468    4928 cli_runner.go:217] Completed: docker container inspect embed-certs-20220512010611-7184 --format={{.State.Status}}: (2.6063808s)
	I0512 01:16:59.970530    4928 host.go:66] Checking if "embed-certs-20220512010611-7184" exists ...
	I0512 01:16:59.978974    4928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220512010611-7184
	I0512 01:17:01.053179    4928 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220512010611-7184: (1.0739876s)
	I0512 01:17:01.054587    4928 pause.go:58] "namespaces" ="keys" ="(MISSING)"
	I0512 01:17:01.208677    4928 out.go:177] * Pausing node embed-certs-20220512010611-7184 ... 
	I0512 01:17:01.371079    4928 host.go:66] Checking if "embed-certs-20220512010611-7184" exists ...
	I0512 01:17:01.382613    4928 ssh_runner.go:195] Run: systemctl --version
	I0512 01:17:01.389673    4928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220512010611-7184
	I0512 01:17:02.459815    4928 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220512010611-7184: (1.0698095s)
	I0512 01:17:02.515720    4928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50418 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\embed-certs-20220512010611-7184\id_rsa Username:docker}
	I0512 01:17:02.673137    4928 ssh_runner.go:235] Completed: systemctl --version: (1.2904005s)
	I0512 01:17:02.689666    4928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 01:17:02.725001    4928 pause.go:50] kubelet running: true
	I0512 01:17:02.736967    4928 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0512 01:17:03.100142    4928 ssh_runner.go:195] Run: docker ps --filter status=running --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0512 01:17:03.177720    4928 docker.go:459] Pausing containers: [9dc69e11c6a1 2ab01f88686a ca490861f8c5 518dc8f1de40 f81a11687ddc 771266c61ad4 c33958dd750e f5ebcb93e156 c0de5d57b524 67ed526660f7 a69052506663 9f604816537e b523f2c5d6b7 fb24ff0a554f 232a840dde2a 89f8f5f5be49 1ac20faf9fc4 f8b333667e2b]
	I0512 01:17:03.187623    4928 ssh_runner.go:195] Run: docker pause 9dc69e11c6a1 2ab01f88686a ca490861f8c5 518dc8f1de40 f81a11687ddc 771266c61ad4 c33958dd750e f5ebcb93e156 c0de5d57b524 67ed526660f7 a69052506663 9f604816537e b523f2c5d6b7 fb24ff0a554f 232a840dde2a 89f8f5f5be49 1ac20faf9fc4 f8b333667e2b
	I0512 01:17:08.624473    4928 ssh_runner.go:235] Completed: docker pause 9dc69e11c6a1 2ab01f88686a ca490861f8c5 518dc8f1de40 f81a11687ddc 771266c61ad4 c33958dd750e f5ebcb93e156 c0de5d57b524 67ed526660f7 a69052506663 9f604816537e b523f2c5d6b7 fb24ff0a554f 232a840dde2a 89f8f5f5be49 1ac20faf9fc4 f8b333667e2b: (5.4365705s)
	I0512 01:17:08.629474    4928 out.go:177] 
	W0512 01:17:08.633474    4928 out.go:239] X Exiting due to GUEST_PAUSE: docker: docker pause 9dc69e11c6a1 2ab01f88686a ca490861f8c5 518dc8f1de40 f81a11687ddc 771266c61ad4 c33958dd750e f5ebcb93e156 c0de5d57b524 67ed526660f7 a69052506663 9f604816537e b523f2c5d6b7 fb24ff0a554f 232a840dde2a 89f8f5f5be49 1ac20faf9fc4 f8b333667e2b: Process exited with status 1
	stdout:
	9dc69e11c6a1
	2ab01f88686a
	ca490861f8c5
	518dc8f1de40
	f81a11687ddc
	771266c61ad4
	c33958dd750e
	f5ebcb93e156
	c0de5d57b524
	67ed526660f7
	a69052506663
	9f604816537e
	b523f2c5d6b7
	232a840dde2a
	89f8f5f5be49
	1ac20faf9fc4
	f8b333667e2b
	
	stderr:
	Error response from daemon: Cannot pause container fb24ff0a554f61b00a9687ba07ba5ecf0249fe182de14e082a57f1c3023219b7: OCI runtime pause failed: unable to freeze: unknown
	
	X Exiting due to GUEST_PAUSE: docker: docker pause 9dc69e11c6a1 2ab01f88686a ca490861f8c5 518dc8f1de40 f81a11687ddc 771266c61ad4 c33958dd750e f5ebcb93e156 c0de5d57b524 67ed526660f7 a69052506663 9f604816537e b523f2c5d6b7 fb24ff0a554f 232a840dde2a 89f8f5f5be49 1ac20faf9fc4 f8b333667e2b: Process exited with status 1
	stdout:
	9dc69e11c6a1
	2ab01f88686a
	ca490861f8c5
	518dc8f1de40
	f81a11687ddc
	771266c61ad4
	c33958dd750e
	f5ebcb93e156
	c0de5d57b524
	67ed526660f7
	a69052506663
	9f604816537e
	b523f2c5d6b7
	232a840dde2a
	89f8f5f5be49
	1ac20faf9fc4
	f8b333667e2b
	
	stderr:
	Error response from daemon: Cannot pause container fb24ff0a554f61b00a9687ba07ba5ecf0249fe182de14e082a57f1c3023219b7: OCI runtime pause failed: unable to freeze: unknown
	
	W0512 01:17:08.634463    4928 out.go:239] * 
	* 
	W0512 01:17:08.671694    4928 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_pause_0a4d03c8adbe4992011689b475409882710ca950_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_pause_0a4d03c8adbe4992011689b475409882710ca950_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0512 01:17:08.678797    4928 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:313: out/minikube-windows-amd64.exe pause -p embed-certs-20220512010611-7184 --alsologtostderr -v=1 failed: exit status 80
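The root failure here is the daemon-side error in the stderr block: "Cannot pause container fb24ff0a554f...: OCI runtime pause failed: unable to freeze". A minimal sketch for narrowing that down by hand, assuming the node container is still up and that the inner Docker uses the cgroup v1 freezer; the freezer.state path is an assumption about the kicbase image, not something taken from this run:

	# Confirm which state the refusing container ended up in
	out/minikube-windows-amd64.exe -p embed-certs-20220512010611-7184 ssh -- "docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' fb24ff0a554f"
	# A freezer stuck in FREEZING would match runc's "unable to freeze"
	out/minikube-windows-amd64.exe -p embed-certs-20220512010611-7184 ssh -- "cat /sys/fs/cgroup/freezer/docker/fb24ff0a554f*/freezer.state"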
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220512010611-7184
helpers_test.go:231: (dbg) Done: docker inspect embed-certs-20220512010611-7184: (1.1246933s)
helpers_test.go:235: (dbg) docker inspect embed-certs-20220512010611-7184:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f4e28399843b28941bba40e3d3af83b376043607edf06c84a6114ce246bbbf24",
	        "Created": "2022-05-12T01:06:59.3694127Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 211306,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-12T01:09:11.1082331Z",
	            "FinishedAt": "2022-05-12T01:08:51.0918361Z"
	        },
	        "Image": "sha256:8a42e1145657f551cd435eddb43b96ab44d0facbe44106da934225366eeb7757",
	        "ResolvConfPath": "/var/lib/docker/containers/f4e28399843b28941bba40e3d3af83b376043607edf06c84a6114ce246bbbf24/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f4e28399843b28941bba40e3d3af83b376043607edf06c84a6114ce246bbbf24/hostname",
	        "HostsPath": "/var/lib/docker/containers/f4e28399843b28941bba40e3d3af83b376043607edf06c84a6114ce246bbbf24/hosts",
	        "LogPath": "/var/lib/docker/containers/f4e28399843b28941bba40e3d3af83b376043607edf06c84a6114ce246bbbf24/f4e28399843b28941bba40e3d3af83b376043607edf06c84a6114ce246bbbf24-json.log",
	        "Name": "/embed-certs-20220512010611-7184",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220512010611-7184:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220512010611-7184",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/983caab56e6e21128f0b32a653617b71cbd07396cc92226128f32156658526cb-init/diff:/var/lib/docker/overlay2/f5e9ce82d74c36e0b6de7ac5d28dc4eacb2abae094b3550db66974795ad74446/diff:/var/lib/docker/overlay2/63fa2edc88f635760dd847938696c8fc50aad8a0c51ab6c6f93b0aa9a6fcefe6/diff:/var/lib/docker/overlay2/5fcaace21e215fd120a775470b4a78ef06d9f466e024f0b1baef234ddf87f04f/diff:/var/lib/docker/overlay2/6c9accb62919ca088333f8b3a193f43886e2c3042c5ec726e1c7fd19ee183024/diff:/var/lib/docker/overlay2/a9a1aea640018bd9e524c36422ea97cc97a391601b718810fed779260e48c87a/diff:/var/lib/docker/overlay2/2476b6e8d656e43d76d4288098236bc765cb37fa9dde696f09bfce485e99229e/diff:/var/lib/docker/overlay2/6cdf08ddc61561e961de67f3e14478397868c53a4d8552909a5b84e35b28db1f/diff:/var/lib/docker/overlay2/6f6f8b6686cc7838a52ce30fdc4e938cde2fb68b161e09a9bc81fa11011619a6/diff:/var/lib/docker/overlay2/9f55b91f762ea60dc3da71cf56e5ca24181e712389d99cb8d972bba474f5d6a4/diff:/var/lib/docker/overlay2/4c269b
df57eae0d21d2034dc086308278f9b00f2301c726b6de876b9ff97298d/diff:/var/lib/docker/overlay2/5fb8ed9b9e765df8150f27b7847ec7333b2c7d978dbc1161da97d630ec7e43e2/diff:/var/lib/docker/overlay2/3a297e9f6ab51d930ef61c49a0fea772cdc2a2704a077db6adb142eb044d9a93/diff:/var/lib/docker/overlay2/2068464f4655627fb513b31660ab34c938e559da10d44fd723ce9e1d744a037d/diff:/var/lib/docker/overlay2/f783596106daebadefbb7774015c369d757665d434c96581b426b2e5f5b453c4/diff:/var/lib/docker/overlay2/ac8927d3cc7829cc82e4c0214dd4adee97aedbe2b7d992cbbc08288443c8e320/diff:/var/lib/docker/overlay2/91eff4612dd56b2c82f4357b879f9cdcbc13704bf8f6dcbfc56febb104774843/diff:/var/lib/docker/overlay2/ee7366acc162efb7b878c4c56df021a8ef756fa595230ffe898cd0dd0355eb44/diff:/var/lib/docker/overlay2/ab5df115d2ec8cd71172942a2e449de32b93a3b6b5d90122c0e734c5e11d6bb2/diff:/var/lib/docker/overlay2/59fcbe9b552129cdc5c96e9e8ff27f4b88a12645aec3cf8f48f28d91521760e0/diff:/var/lib/docker/overlay2/0684681eb5880654b43e8803ef8f17b85e6129c85ff81c13b509563184a77625/diff:/var/lib/d
ocker/overlay2/8bbd24801b480df6ca8545e8f8bb09c17b7598c2868fb94ea5b8775ce2f311b4/diff:/var/lib/docker/overlay2/f28553cc59fccccfdfc5c24b7b8dfe4055c625d0a004731911c34b4ba32a9dfb/diff:/var/lib/docker/overlay2/2e47a8ef6e4481885d71f57a1d9ef99898b741644addd2796de5c2f4c696edb0/diff:/var/lib/docker/overlay2/e1f1eaaa809c974dabd197f590d19be05325f506a53a9a1f8ba29defd7096f60/diff:/var/lib/docker/overlay2/83ce12af60df76f98283ed8f3450cd5727b42d06055b18f04a18068b105ae128/diff:/var/lib/docker/overlay2/5fd34820f54e7f8f0c898c21b5d9d030e5b82c65c901897306c3db475481167f/diff:/var/lib/docker/overlay2/3ecb6f46fa47a8906ff5de1da5a63be9c664ff5bc66faf870126868d36bb77c6/diff:/var/lib/docker/overlay2/ccb92f12dd3e84b11b2c9b1ef6a0581ad5894648432ebe7cb5d16d48c7aacf6e/diff:/var/lib/docker/overlay2/7c6d11dc9abdd4916f3759c8ae4db8c3011cff872f2fd3cc502e7f663e496765/diff:/var/lib/docker/overlay2/b865b0351704115fa113e25f7651d1dc1e2f0348c332552e555e898094f34802/diff:/var/lib/docker/overlay2/bbcf207462c3f88368214d8e4ca222f28a828bd30661741d421665b4d10
80f07/diff:/var/lib/docker/overlay2/b554a32e9a2e4d3773e918754c27a1b32bc7ec5327d3bd1f52d7a146a07fa2c5/diff:/var/lib/docker/overlay2/d0a997bacfa9b1b54f61c62f00ad2797616ea9bb55182aad68ed805f96f5f72b/diff:/var/lib/docker/overlay2/e0c168ecfe6a93618f4f653c1aba422023114f242ab1045591d0c8454573d5c2/diff:/var/lib/docker/overlay2/fb67af38a46ef55935fcfb4f1be5f34b45b3d0e1c571538828117f23eedea417/diff:/var/lib/docker/overlay2/e96ed0776e5f27ef225469ac5f5e8ed2e299c72d5db88782599c0fdd1cec2fe3/diff:/var/lib/docker/overlay2/91b77e60e0a7864ace4f5a4d65f465bd7fe862616a87a74ee9fee21dc5dceb07/diff:/var/lib/docker/overlay2/9829211293f70b356dfa8d07b5dbbc3a6d05415cbd2840fd9dd948b8b315bf18/diff:/var/lib/docker/overlay2/dc35dda36e34a2f4f3a5d958b1a7d4d75db8655c4bc7b4b3d9591f43f9a645fc/diff:/var/lib/docker/overlay2/968c2bb04f641a9c8bd30d38659dc28973b31bfd577bb1aa355ae6c2ab4a0d34/diff:/var/lib/docker/overlay2/37432c6ae0b10a52e95b215fdd2e256362060f32c4a52f0d2021b6e10b3ed77b/diff:/var/lib/docker/overlay2/77687f9734b19f3e8a5bb08b07067e0b572775
20867b7a6ad80b67ffebe332d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/983caab56e6e21128f0b32a653617b71cbd07396cc92226128f32156658526cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/983caab56e6e21128f0b32a653617b71cbd07396cc92226128f32156658526cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/983caab56e6e21128f0b32a653617b71cbd07396cc92226128f32156658526cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220512010611-7184",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220512010611-7184/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220512010611-7184",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220512010611-7184",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220512010611-7184",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6c7da326ef50888a0effb09dbadc0bfad9746a5feb2bc86f00982944d9f52aab",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50419"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50415"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50416"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50417"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6c7da326ef50",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220512010611-7184": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f4e28399843b",
	                        "embed-certs-20220512010611-7184"
	                    ],
	                    "NetworkID": "17506d6362697ddcd7c4bad2fcbfa96514130db5d8a4562b1b83e75435155eb1",
	                    "EndpointID": "9ebc9e54e0ad4ed048c06fea5ac4ab54907e734c8ffda363d26e9eb3ad57346c",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
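Most of the inspect dump above is noise for this failure; the fields that matter are State (running, not paused, RestartCount 0) and the published ports. A minimal sketch of a narrower query using Docker's Go-template support, with the same field names as in the dump:

	# Summarize container state in one line
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}} restarts={{.RestartCount}}' embed-certs-20220512010611-7184
	# Dump only the published port map as JSON
	docker inspect -f '{{json .NetworkSettings.Ports}}' embed-certs-20220512010611-7184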
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220512010611-7184 -n embed-certs-20220512010611-7184
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220512010611-7184 -n embed-certs-20220512010611-7184: exit status 2 (6.8969747s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-20220512010611-7184 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p embed-certs-20220512010611-7184 logs -n 25: (19.610626s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |                    Profile                     |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------------------------|-------------------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p                               | no-preload-20220512010315-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:06 GMT | 12 May 22 01:07 GMT |
	|         | no-preload-20220512010315-7184                    |                                                |                   |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |                   |         |                     |                     |
	| start   | -p                                                | embed-certs-20220512010611-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:06 GMT | 12 May 22 01:08 GMT |
	|         | embed-certs-20220512010611-7184                   |                                                |                   |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                |                   |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                                |                   |         |                     |                     |
	|         | --driver=docker                                   |                                                |                   |         |                     |                     |
	|         | --kubernetes-version=v1.23.5                      |                                                |                   |         |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220512010611-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:08 GMT | 12 May 22 01:08 GMT |
	|         | embed-certs-20220512010611-7184                   |                                                |                   |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |                   |         |                     |                     |
	| stop    | -p                                                | embed-certs-20220512010611-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:08 GMT | 12 May 22 01:08 GMT |
	|         | embed-certs-20220512010611-7184                   |                                                |                   |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |                   |         |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220512010611-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:08 GMT | 12 May 22 01:08 GMT |
	|         | embed-certs-20220512010611-7184                   |                                                |                   |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |                   |         |                     |                     |
	| start   | -p                                                | force-systemd-env-20220512010244-7184          | minikube4\jenkins | v1.25.2 | 12 May 22 01:02 GMT | 12 May 22 01:10 GMT |
	|         | force-systemd-env-20220512010244-7184             |                                                |                   |         |                     |                     |
	|         | --memory=2048 --alsologtostderr -v=5              |                                                |                   |         |                     |                     |
	|         | --driver=docker                                   |                                                |                   |         |                     |                     |
	| ssh     | force-systemd-env-20220512010244-7184             | force-systemd-env-20220512010244-7184          | minikube4\jenkins | v1.25.2 | 12 May 22 01:11 GMT | 12 May 22 01:11 GMT |
	|         | ssh docker info --format                          |                                                |                   |         |                     |                     |
	|         | {{.CgroupDriver}}                                 |                                                |                   |         |                     |                     |
	| delete  | -p                                                | force-systemd-env-20220512010244-7184          | minikube4\jenkins | v1.25.2 | 12 May 22 01:11 GMT | 12 May 22 01:11 GMT |
	|         | force-systemd-env-20220512010244-7184             |                                                |                   |         |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220512011134-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 01:11 GMT | 12 May 22 01:11 GMT |
	|         | disable-driver-mounts-20220512011134-7184         |                                                |                   |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220512010246-7184            | minikube4\jenkins | v1.25.2 | 12 May 22 01:02 GMT | 12 May 22 01:11 GMT |
	|         | old-k8s-version-20220512010246-7184               |                                                |                   |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                |                   |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                |                   |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                |                   |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                |                   |         |                     |                     |
	|         | --keep-context=false                              |                                                |                   |         |                     |                     |
	|         | --driver=docker                                   |                                                |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                                                |                   |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220512010246-7184            | minikube4\jenkins | v1.25.2 | 12 May 22 01:12 GMT | 12 May 22 01:12 GMT |
	|         | old-k8s-version-20220512010246-7184               |                                                |                   |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |                   |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220512010246-7184            | minikube4\jenkins | v1.25.2 | 12 May 22 01:12 GMT | 12 May 22 01:12 GMT |
	|         | old-k8s-version-20220512010246-7184               |                                                |                   |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |                   |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220512010246-7184            | minikube4\jenkins | v1.25.2 | 12 May 22 01:12 GMT | 12 May 22 01:12 GMT |
	|         | old-k8s-version-20220512010246-7184               |                                                |                   |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |                   |         |                     |                     |
	| start   | -p no-preload-20220512010315-7184                 | no-preload-20220512010315-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:07 GMT | 12 May 22 01:13 GMT |
	|         | --memory=2200 --alsologtostderr                   |                                                |                   |         |                     |                     |
	|         | --wait=true --preload=false                       |                                                |                   |         |                     |                     |
	|         | --driver=docker                                   |                                                |                   |         |                     |                     |
	|         | --kubernetes-version=v1.23.6-rc.0                 |                                                |                   |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220512011148-7184 | minikube4\jenkins | v1.25.2 | 12 May 22 01:11 GMT | 12 May 22 01:14 GMT |
	|         | default-k8s-different-port-20220512011148-7184    |                                                |                   |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |                   |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                |                   |         |                     |                     |
	|         | --kubernetes-version=v1.23.5                      |                                                |                   |         |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220512011148-7184 | minikube4\jenkins | v1.25.2 | 12 May 22 01:14 GMT | 12 May 22 01:14 GMT |
	|         | default-k8s-different-port-20220512011148-7184    |                                                |                   |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |                   |         |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220512011148-7184 | minikube4\jenkins | v1.25.2 | 12 May 22 01:14 GMT | 12 May 22 01:14 GMT |
	|         | default-k8s-different-port-20220512011148-7184    |                                                |                   |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |                   |         |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220512011148-7184 | minikube4\jenkins | v1.25.2 | 12 May 22 01:14 GMT | 12 May 22 01:14 GMT |
	|         | default-k8s-different-port-20220512011148-7184    |                                                |                   |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |                   |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220512010315-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:14 GMT | 12 May 22 01:14 GMT |
	|         | no-preload-20220512010315-7184                    |                                                |                   |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                |                   |         |                     |                     |
	| pause   | -p                                                | no-preload-20220512010315-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:14 GMT | 12 May 22 01:14 GMT |
	|         | no-preload-20220512010315-7184                    |                                                |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |                   |         |                     |                     |
	| unpause | -p                                                | no-preload-20220512010315-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:15 GMT | 12 May 22 01:15 GMT |
	|         | no-preload-20220512010315-7184                    |                                                |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |                   |         |                     |                     |
	| start   | -p                                                | embed-certs-20220512010611-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:08 GMT | 12 May 22 01:15 GMT |
	|         | embed-certs-20220512010611-7184                   |                                                |                   |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                |                   |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                                |                   |         |                     |                     |
	|         | --driver=docker                                   |                                                |                   |         |                     |                     |
	|         | --kubernetes-version=v1.23.5                      |                                                |                   |         |                     |                     |
	| delete  | -p                                                | no-preload-20220512010315-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:15 GMT | 12 May 22 01:16 GMT |
	|         | no-preload-20220512010315-7184                    |                                                |                   |         |                     |                     |
	| delete  | -p                                                | no-preload-20220512010315-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:16 GMT | 12 May 22 01:16 GMT |
	|         | no-preload-20220512010315-7184                    |                                                |                   |         |                     |                     |
	| ssh     | -p                                                | embed-certs-20220512010611-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:16 GMT | 12 May 22 01:16 GMT |
	|         | embed-certs-20220512010611-7184                   |                                                |                   |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                |                   |         |                     |                     |
	|---------|---------------------------------------------------|------------------------------------------------|-------------------|---------|---------------------|---------------------|
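Each Audit row is one CLI invocation with its arguments wrapped across table cells; reassembled, the embed-certs start row above corresponds to the single command below (reconstructed from the table, not quoted verbatim from any log line):

	out/minikube-windows-amd64.exe start -p embed-certs-20220512010611-7184 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.5
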
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/12 01:16:16
	Running on machine: minikube4
	Binary: Built with gc go1.18.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0512 01:16:16.731520    2560 out.go:296] Setting OutFile to fd 1852 ...
	I0512 01:16:16.790528    2560 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:16:16.790528    2560 out.go:309] Setting ErrFile to fd 1792...
	I0512 01:16:16.790528    2560 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:16:16.802535    2560 out.go:303] Setting JSON to false
	I0512 01:16:16.804525    2560 start.go:115] hostinfo: {"hostname":"minikube4","uptime":16629,"bootTime":1652301547,"procs":164,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0512 01:16:16.804525    2560 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0512 01:16:16.810569    2560 out.go:177] * [newest-cni-20220512011616-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0512 01:16:16.813541    2560 notify.go:193] Checking for updates...
	I0512 01:16:16.816541    2560 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0512 01:16:16.820526    2560 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0512 01:16:16.822531    2560 out.go:177]   - MINIKUBE_LOCATION=13639
	I0512 01:16:16.825517    2560 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0512 01:16:13.643767    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:15.656108    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:15.420131    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:17.995331    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:16.829557    2560 config.go:178] Loaded profile config "default-k8s-different-port-20220512011148-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:16:16.830548    2560 config.go:178] Loaded profile config "embed-certs-20220512010611-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:16:16.830548    2560 config.go:178] Loaded profile config "old-k8s-version-20220512010246-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0512 01:16:16.831523    2560 driver.go:358] Setting default libvirt URI to qemu:///system
	I0512 01:16:20.037097    2560 docker.go:137] docker version: linux-20.10.14
	I0512 01:16:20.046519    2560 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:16:22.283107    2560 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2364733s)
	I0512 01:16:22.284599    2560 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:85 OomKillDisable:true NGoroutines:63 SystemTime:2022-05-12 01:16:21.1622095 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
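When only a single field of a docker info dump like this matters, the same template mechanism the tests use can select it directly, as in the cgroup-driver probe recorded in the Audit table above (illustrative):

	docker info --format "{{.CgroupDriver}}"
	docker system info --format "{{json .SecurityOptions}}"
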
	I0512 01:16:22.294749    2560 out.go:177] * Using the docker driver based on user configuration
	I0512 01:16:18.160292    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:20.650197    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:22.296746    2560 start.go:284] selected driver: docker
	I0512 01:16:22.296746    2560 start.go:801] validating driver "docker" against <nil>
	I0512 01:16:22.296746    2560 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0512 01:16:22.368275    2560 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:16:24.545409    2560 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1770225s)
	I0512 01:16:24.545409    2560 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:85 OomKillDisable:true NGoroutines:63 SystemTime:2022-05-12 01:16:23.4499135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:16:24.545409    2560 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	W0512 01:16:24.545409    2560 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0512 01:16:24.546410    2560 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0512 01:16:24.549409    2560 out.go:177] * Using Docker Desktop driver with the root privilege
	I0512 01:16:24.551412    2560 cni.go:95] Creating CNI manager for ""
	I0512 01:16:24.551412    2560 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 01:16:24.551412    2560 start_flags.go:306] config:
	{Name:newest-cni-20220512011616-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:newest-cni-20220512011616-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
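This config struct is what gets persisted as the profile's config.json in the WriteFile step a few lines below; to review it after the fact on this host (path taken verbatim from the log), cmd.exe's type will print it:

	type C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\config.json
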
	I0512 01:16:24.556410    2560 out.go:177] * Starting control plane node newest-cni-20220512011616-7184 in cluster newest-cni-20220512011616-7184
	I0512 01:16:24.560409    2560 cache.go:120] Beginning downloading kic base image for docker with docker
	I0512 01:16:24.564405    2560 out.go:177] * Pulling base image ...
	I0512 01:16:20.420793    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:22.423088    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:24.429191    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:24.567405    2560 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime docker
	I0512 01:16:24.567405    2560 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0512 01:16:24.567405    2560 preload.go:148] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4
	I0512 01:16:24.567405    2560 cache.go:57] Caching tarball of preloaded images
	I0512 01:16:24.568410    2560 preload.go:174] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0512 01:16:24.568410    2560 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6-rc.0 on docker
	I0512 01:16:24.568410    2560 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\config.json ...
	I0512 01:16:24.569423    2560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\config.json: {Name:mk9ed3823f0455a8f954e369d660954dc104babf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:16:25.711699    2560 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
	I0512 01:16:25.711699    2560 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in daemon, skipping load
	I0512 01:16:25.711699    2560 cache.go:206] Successfully downloaded all kic artifacts
	I0512 01:16:25.711699    2560 start.go:352] acquiring machines lock for newest-cni-20220512011616-7184: {Name:mkc09b0a00a54bffa6656c656699ec1148211894 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 01:16:25.712147    2560 start.go:356] acquired machines lock for "newest-cni-20220512011616-7184" in 110.4µs
	I0512 01:16:25.712147    2560 start.go:91] Provisioning new machine with config: &{Name:newest-cni-20220512011616-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:newest-cni-20220512011616-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 01:16:25.712675    2560 start.go:131] createHost starting for "" (driver="docker")
	I0512 01:16:25.716699    2560 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0512 01:16:25.717122    2560 start.go:165] libmachine.API.Create for "newest-cni-20220512011616-7184" (driver="docker")
	I0512 01:16:25.717165    2560 client.go:168] LocalClient.Create starting
	I0512 01:16:25.717354    2560 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0512 01:16:25.717354    2560 main.go:134] libmachine: Decoding PEM data...
	I0512 01:16:25.717354    2560 main.go:134] libmachine: Parsing certificate...
	I0512 01:16:25.717932    2560 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0512 01:16:25.717932    2560 main.go:134] libmachine: Decoding PEM data...
	I0512 01:16:25.717932    2560 main.go:134] libmachine: Parsing certificate...
	I0512 01:16:25.729695    2560 cli_runner.go:164] Run: docker network inspect newest-cni-20220512011616-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 01:16:23.152744    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:25.649812    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:26.928418    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:28.937556    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	W0512 01:16:26.874363    2560 cli_runner.go:211] docker network inspect newest-cni-20220512011616-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 01:16:26.972439    2560 cli_runner.go:217] Completed: docker network inspect newest-cni-20220512011616-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1445541s)
	I0512 01:16:26.988414    2560 network_create.go:272] running [docker network inspect newest-cni-20220512011616-7184] to gather additional debugging logs...
	I0512 01:16:26.988414    2560 cli_runner.go:164] Run: docker network inspect newest-cni-20220512011616-7184
	W0512 01:16:28.153691    2560 cli_runner.go:211] docker network inspect newest-cni-20220512011616-7184 returned with exit code 1
	I0512 01:16:28.153691    2560 cli_runner.go:217] Completed: docker network inspect newest-cni-20220512011616-7184: (1.1652173s)
	I0512 01:16:28.153691    2560 network_create.go:275] error running [docker network inspect newest-cni-20220512011616-7184]: docker network inspect newest-cni-20220512011616-7184: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220512011616-7184
	I0512 01:16:28.153691    2560 network_create.go:277] output of [docker network inspect newest-cni-20220512011616-7184]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220512011616-7184
	
	** /stderr **
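The exit-1 inspect above is the expected probe result: minikube checks whether the profile network already exists before creating one, and "No such network" simply means a fresh network is needed. An equivalent existence check that avoids the error path (sketch):

	docker network ls --filter name=newest-cni-20220512011616-7184 --format "{{.Name}}"
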
	I0512 01:16:28.166728    2560 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 01:16:29.319410    2560 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1524153s)
	I0512 01:16:29.341981    2560 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00014e160] misses:0}
	I0512 01:16:29.341981    2560 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:16:29.341981    2560 network_create.go:115] attempt to create docker network newest-cni-20220512011616-7184 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0512 01:16:29.348980    2560 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220512011616-7184
	I0512 01:16:30.640275    2560 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220512011616-7184: (1.2912279s)
	I0512 01:16:30.640275    2560 network_create.go:99] docker network newest-cni-20220512011616-7184 192.168.49.0/24 created
	I0512 01:16:30.640275    2560 kic.go:106] calculated static IP "192.168.49.2" for the "newest-cni-20220512011616-7184" container
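With the 192.168.49.0/24 bridge created, the gateway holds .1 and the node container is assigned the static .2. The reservation can be confirmed with the same template style the log itself uses for network inspection (illustrative):

	docker network inspect newest-cni-20220512011616-7184 --format "{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}"
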
	I0512 01:16:30.653276    2560 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0512 01:16:27.655025    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:30.153863    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:32.162511    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:31.416666    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:33.429148    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:31.806706    2560 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1533706s)
	I0512 01:16:31.813708    2560 cli_runner.go:164] Run: docker volume create newest-cni-20220512011616-7184 --label name.minikube.sigs.k8s.io=newest-cni-20220512011616-7184 --label created_by.minikube.sigs.k8s.io=true
	I0512 01:16:32.987718    2560 cli_runner.go:217] Completed: docker volume create newest-cni-20220512011616-7184 --label name.minikube.sigs.k8s.io=newest-cni-20220512011616-7184 --label created_by.minikube.sigs.k8s.io=true: (1.1739498s)
	I0512 01:16:32.987718    2560 oci.go:103] Successfully created a docker volume newest-cni-20220512011616-7184
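The volume carries name.minikube.sigs.k8s.io and created_by.minikube.sigs.k8s.io labels so later cleanup can find every artifact the profile owns; listing by label (sketch):

	docker volume ls --filter label=name.minikube.sigs.k8s.io=newest-cni-20220512011616-7184
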
	I0512 01:16:32.994718    2560 cli_runner.go:164] Run: docker run --rm --name newest-cni-20220512011616-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20220512011616-7184 --entrypoint /usr/bin/test -v newest-cni-20220512011616-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib
	I0512 01:16:36.553127    2560 cli_runner.go:217] Completed: docker run --rm --name newest-cni-20220512011616-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20220512011616-7184 --entrypoint /usr/bin/test -v newest-cni-20220512011616-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib: (3.5582267s)
	I0512 01:16:36.553127    2560 oci.go:107] Successfully prepared a docker volume newest-cni-20220512011616-7184
	I0512 01:16:36.553127    2560 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime docker
	I0512 01:16:36.553127    2560 kic.go:179] Starting extracting preloaded images to volume ...
	I0512 01:16:36.563406    2560 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20220512011616-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir
	I0512 01:16:34.165401    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:36.645803    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:35.924732    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:38.424726    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:38.650005    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:40.653524    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:40.425061    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:42.425638    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:44.930339    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:42.674179    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:45.151951    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:47.153770    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:47.427010    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:49.432201    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:49.157438    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:51.649156    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:51.924038    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:53.932652    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:53.667150    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:56.151572    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:56.423118    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:58.159462    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:08.598898    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:08.925617    2560 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20220512011616-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir: (32.3596272s)
	I0512 01:17:08.925617    2560 kic.go:188] duration metric: took 32.370826 seconds to extract preloaded images to volume
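The 32-second step above populates the node volume by running tar inside a throwaway container: the lz4-compressed preload tarball is bind-mounted read-only at /preloaded.tar and unpacked into the volume mounted at /extractDir. Stripped of the container plumbing, the extraction itself reduces to (same flags as the logged command):

	tar -I lz4 -xf preloaded-images-k8s-v18-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4 -C /extractDir
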
	I0512 01:17:08.934984    2560 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:17:11.115515    2560 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1804189s)
	I0512 01:17:11.115878    2560 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:87 OomKillDisable:true NGoroutines:63 SystemTime:2022-05-12 01:17:10.0098897 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:17:11.127846    2560 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0512 01:17:08.597927    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:10.653825    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:10.945551    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:13.430017    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:13.264381    2560 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.1364256s)
	I0512 01:17:13.270379    2560 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-20220512011616-7184 --name newest-cni-20220512011616-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20220512011616-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-20220512011616-7184 --network newest-cni-20220512011616-7184 --ip 192.168.49.2 --volume newest-cni-20220512011616-7184:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
	I0512 01:17:15.535544    2560 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-20220512011616-7184 --name newest-cni-20220512011616-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20220512011616-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-20220512011616-7184 --network newest-cni-20220512011616-7184 --ip 192.168.49.2 --volume newest-cni-20220512011616-7184:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: (2.2639077s)
	I0512 01:17:15.547222    2560 cli_runner.go:164] Run: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Running}}
	I0512 01:17:16.710827    2560 cli_runner.go:217] Completed: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Running}}: (1.1635447s)
	I0512 01:17:16.717829    2560 cli_runner.go:164] Run: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}
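Each --publish=127.0.0.1::<port> flag in the docker run above binds a guest port to an ephemeral loopback port chosen by Docker; once the container is up, the host-side mapping can be recovered with docker port (illustrative):

	docker port newest-cni-20220512011616-7184 8443
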
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-05-12 01:09:11 UTC, end at Thu 2022-05-12 01:17:23 UTC. --
	May 12 01:14:40 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:14:40.048239000Z" level=info msg="ignoring event" container=2b690b2fc9feb62ed8e8f9aa85d220ab2d1c28238e9c04955d0ab596503d37cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:14:50 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:14:50.341408600Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ce7f1fe82bf2bc1a64ca95cc4e67c76cdb682894624330aced5ec47b5af10f71
	May 12 01:14:50 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:14:50.468774400Z" level=info msg="ignoring event" container=ce7f1fe82bf2bc1a64ca95cc4e67c76cdb682894624330aced5ec47b5af10f71 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:14:50 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:14:50.993120200Z" level=info msg="ignoring event" container=540f22bfbe5c4a11285178b260341d16a8a21aeb1942c45178b44c2dd8211d85 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:14:51 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:14:51.340757800Z" level=info msg="ignoring event" container=a71723de1519b021bb4e70ea2a0ba5f89f0d18f9b54d874291ac057ae4e8617b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:14:51 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:14:51.722771800Z" level=info msg="ignoring event" container=5e45d5b75bc1e008b6730c96dd4aa282e0de28ee110173867fedd45b5da756de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:14:52 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:14:52.329839000Z" level=info msg="ignoring event" container=2fe4977f8083adbe9b9c195ddaf124421a267b1f00c46d7db2e5fae7133214d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:14:52 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:14:52.722427900Z" level=info msg="ignoring event" container=315d2f077acbb782fe1f22566e36db5e17b7ac597f0ecda2539da4118b544a37 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:14:53 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:14:53.107205300Z" level=info msg="ignoring event" container=d06f2ce3c42ff40d1feefbdcd44f86f9a7964086e0ab2c9b45af5666b2456b9e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:15:43 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:15:43.602422400Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:15:43 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:15:43.602558700Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:15:43 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:15:43.675088400Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:15:46 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:15:46.314464700Z" level=info msg="ignoring event" container=9b23d1a7ae8d50fc36c3a84e7fa27f6bd394c7ca3075393e8d59734f91c89562 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:15:53 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:15:53.059211800Z" level=info msg="ignoring event" container=16b5022cbfde459503ec1c6c97880167c863f99f6417673f21bee9897eb6e903 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:15:53 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:15:53.289349500Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	May 12 01:15:53 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:15:53.451241900Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	May 12 01:16:09 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:16:09.652993800Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	May 12 01:16:10 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:16:10.155996900Z" level=info msg="ignoring event" container=9f361ee2f262164c7dfece5dc895659583381b0c07a6b9c3e249e6c3ea5dc449 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:16:11 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:16:11.271856200Z" level=info msg="ignoring event" container=7db9afb6458522b7e6806386164b90cb132665dff4a75636da1317e786507031 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:16:28 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:16:28.966634100Z" level=info msg="ignoring event" container=87a66ed88e9c9053655f53c15fe6c3b508f305f2c9f635fe9860b89bdc33f901 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:16:37 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:16:37.190671900Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:16:37 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:16:37.190888400Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:16:37 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:16:37.222221100Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:17:03 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:17:03.081663200Z" level=error msg="Handler for POST /v1.41/images/create returned error: error creating temporary lease: context canceled"
	May 12 01:17:03 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:17:03.749962300Z" level=error msg="Handler for POST /v1.41/containers/fb24ff0a554f/pause returned error: Cannot pause container fb24ff0a554f61b00a9687ba07ba5ecf0249fe182de14e082a57f1c3023219b7: OCI runtime pause failed: unable to freeze: unknown"
	
	* 
	* ==> container status <==
	* time="2022-05-12T01:17:25Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE                    COMMAND                  CREATED              STATUS                       PORTS     NAMES
	02bf63a7e857   a90209bb39e3             "nginx -g 'daemon of…"   28 seconds ago       Created                                k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-56974995fc-jm4p7_kubernetes-dashboard_03203ac1-9306-4299-a099-7915539d52af_3
	9dc69e11c6a1   kubernetesui/dashboard   "/dashboard --insecu…"   48 seconds ago       Up 47 seconds (Paused)                 k8s_kubernetes-dashboard_kubernetes-dashboard-8469778f77-bwdns_kubernetes-dashboard_4c48a657-b6a3-40e8-86b8-75310a5e2c36_0
	87a66ed88e9c   a90209bb39e3             "nginx -g 'daemon of…"   59 seconds ago       Exited (1) 56 seconds ago              k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-56974995fc-jm4p7_kubernetes-dashboard_03203ac1-9306-4299-a099-7915539d52af_2
	2ab01f88686a   k8s.gcr.io/pause:3.6     "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_kubernetes-dashboard-8469778f77-bwdns_kubernetes-dashboard_4c48a657-b6a3-40e8-86b8-75310a5e2c36_0
	ca490861f8c5   k8s.gcr.io/pause:3.6     "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_dashboard-metrics-scraper-56974995fc-jm4p7_kubernetes-dashboard_03203ac1-9306-4299-a099-7915539d52af_0
	518dc8f1de40   6e38f40d628d             "/storage-provisioner"   About a minute ago   Up About a minute (Paused)             k8s_storage-provisioner_storage-provisioner_kube-system_16ae67b0-7538-42e0-b064-d19f5254d784_0
	f81a11687ddc   k8s.gcr.io/pause:3.6     "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_metrics-server-b955d9d8-dj72t_kube-system_a99dbe00-ec68-4bb2-babf-aaedfbb534ad_0
	771266c61ad4   k8s.gcr.io/pause:3.6     "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_storage-provisioner_kube-system_16ae67b0-7538-42e0-b064-d19f5254d784_0
	c33958dd750e   a4ca41631cc7             "/coredns -conf /etc…"   About a minute ago   Up About a minute (Paused)             k8s_coredns_coredns-64897985d-pfxm2_kube-system_ab04c290-ac38-41e8-8782-4cc5375dc8fd_0
	f5ebcb93e156   3c53fa8541f9             "/usr/local/bin/kube…"   About a minute ago   Up About a minute (Paused)             k8s_kube-proxy_kube-proxy-2cmfg_kube-system_5563a9b4-18bb-4f5c-a0a9-08608f7459ef_0
	c0de5d57b524   k8s.gcr.io/pause:3.6     "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_kube-proxy-2cmfg_kube-system_5563a9b4-18bb-4f5c-a0a9-08608f7459ef_0
	67ed526660f7   k8s.gcr.io/pause:3.6     "/pause"                 About a minute ago   Up About a minute (Paused)             k8s_POD_coredns-64897985d-pfxm2_kube-system_ab04c290-ac38-41e8-8782-4cc5375dc8fd_0
	a69052506663   b0c9e5e4dbb1             "kube-controller-man…"   2 minutes ago        Up 2 minutes (Paused)                  k8s_kube-controller-manager_kube-controller-manager-embed-certs-20220512010611-7184_kube-system_d917ace05297db24f56452b86e4773fb_2
	9f604816537e   884d49d6d8c9             "kube-scheduler --au…"   2 minutes ago        Up 2 minutes (Paused)                  k8s_kube-scheduler_kube-scheduler-embed-certs-20220512010611-7184_kube-system_0e1bb9864df8eba4c0a22d55822b2567_2
	b523f2c5d6b7   3fc1d62d6587             "kube-apiserver --ad…"   2 minutes ago        Up 2 minutes (Paused)                  k8s_kube-apiserver_kube-apiserver-embed-certs-20220512010611-7184_kube-system_b9a3c4e8134074af263a952ade5d5526_2
	fb24ff0a554f   25f8c7f3da61             "etcd --advertise-cl…"   2 minutes ago        Up 2 minutes                           k8s_etcd_etcd-embed-certs-20220512010611-7184_kube-system_3679893b1c4f0b06f0ecc0d962314512_2
	232a840dde2a   k8s.gcr.io/pause:3.6     "/pause"                 2 minutes ago        Up 2 minutes (Paused)                  k8s_POD_kube-scheduler-embed-certs-20220512010611-7184_kube-system_0e1bb9864df8eba4c0a22d55822b2567_0
	89f8f5f5be49   k8s.gcr.io/pause:3.6     "/pause"                 2 minutes ago        Up 2 minutes (Paused)                  k8s_POD_kube-controller-manager-embed-certs-20220512010611-7184_kube-system_d917ace05297db24f56452b86e4773fb_0
	1ac20faf9fc4   k8s.gcr.io/pause:3.6     "/pause"                 2 minutes ago        Up 2 minutes (Paused)                  k8s_POD_kube-apiserver-embed-certs-20220512010611-7184_kube-system_b9a3c4e8134074af263a952ade5d5526_0
	f8b333667e2b   k8s.gcr.io/pause:3.6     "/pause"                 2 minutes ago        Up 2 minutes (Paused)                  k8s_POD_etcd-embed-certs-20220512010611-7184_kube-system_3679893b1c4f0b06f0ecc0d962314512_0
	
	* 
	* ==> coredns [c33958dd750e] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [May12 00:51] WSL2: Performing memory compaction.
	[May12 00:52] WSL2: Performing memory compaction.
	[May12 00:54] WSL2: Performing memory compaction.
	[May12 00:56] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.010948] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.036593] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000001] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000001] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[May12 00:57] WSL2: Performing memory compaction.
	[May12 00:58] WSL2: Performing memory compaction.
	[May12 01:00] WSL2: Performing memory compaction.
	[May12 01:01] WSL2: Performing memory compaction.
	[May12 01:02] WSL2: Performing memory compaction.
	[May12 01:03] WSL2: Performing memory compaction.
	[May12 01:05] WSL2: Performing memory compaction.
	[May12 01:06] WSL2: Performing memory compaction.
	[May12 01:07] WSL2: Performing memory compaction.
	[May12 01:08] WSL2: Performing memory compaction.
	[May12 01:09] WSL2: Performing memory compaction.
	[May12 01:12] WSL2: Performing memory compaction.
	[May12 01:14] WSL2: Performing memory compaction.
	[May12 01:16] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [fb24ff0a554f] <==
	* {"level":"warn","ts":"2022-05-12T01:17:05.708Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3238511125514081863,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-05-12T01:17:06.208Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3238511125514081863,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-05-12T01:17:06.487Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:17:00.906Z","time spent":"5.5810697s","remote":"127.0.0.1:45074","response type":"/etcdserverpb.Maintenance/Status","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2022-05-12T01:17:06.709Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3238511125514081863,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-05-12T01:17:06.720Z","caller":"wal/wal.go:802","msg":"slow fdatasync","took":"6.6140238s","expected-duration":"1s"}
	{"level":"info","ts":"2022-05-12T01:17:06.721Z","caller":"traceutil/trace.go:171","msg":"trace[941811362] linearizableReadLoop","detail":"{readStateIndex:731; appliedIndex:730; }","duration":"6.0290245s","start":"2022-05-12T01:17:00.692Z","end":"2022-05-12T01:17:06.721Z","steps":["trace[941811362] 'read index received'  (duration: 6.0288339s)","trace[941811362] 'applied index is now lower than readState.Index'  (duration: 186.9µs)"],"step_count":2}
	{"level":"warn","ts":"2022-05-12T01:17:07.222Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3238511125514081866,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-05-12T01:17:07.723Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3238511125514081866,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-05-12T01:17:07.832Z","caller":"wal/wal.go:802","msg":"slow fdatasync","took":"1.0878529s","expected-duration":"1s"}
	{"level":"info","ts":"2022-05-12T01:17:07.833Z","caller":"traceutil/trace.go:171","msg":"trace[1368074018] linearizableReadLoop","detail":"{readStateIndex:731; appliedIndex:731; }","duration":"1.1114614s","start":"2022-05-12T01:17:06.721Z","end":"2022-05-12T01:17:07.833Z","steps":["trace[1368074018] 'read index received'  (duration: 1.1114502s)","trace[1368074018] 'applied index is now lower than readState.Index'  (duration: 8.1µs)"],"step_count":2}
	{"level":"warn","ts":"2022-05-12T01:17:08.587Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"7.4513641s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/\" range_end:\"/registry/daemonsets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"warn","ts":"2022-05-12T01:17:08.587Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"5.4105104s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-05-12T01:17:08.587Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"6.8961984s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-05-12T01:17:08.587Z","caller":"traceutil/trace.go:171","msg":"trace[357524218] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:700; }","duration":"5.4105601s","start":"2022-05-12T01:17:03.176Z","end":"2022-05-12T01:17:08.587Z","steps":["trace[357524218] 'agreement among raft nodes before linearized reading'  (duration: 4.6562658s)","trace[357524218] 'count revisions from in-memory index tree'  (duration: 754.2073ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-12T01:17:08.587Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"6.612548s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-05-12T01:17:08.587Z","caller":"traceutil/trace.go:171","msg":"trace[565821533] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:700; }","duration":"6.8963807s","start":"2022-05-12T01:17:01.691Z","end":"2022-05-12T01:17:08.587Z","steps":["trace[565821533] 'agreement among raft nodes before linearized reading'  (duration: 6.1418729s)","trace[565821533] 'count revisions from in-memory index tree'  (duration: 754.3134ms)"],"step_count":2}
	{"level":"info","ts":"2022-05-12T01:17:08.587Z","caller":"traceutil/trace.go:171","msg":"trace[673773244] range","detail":"{range_begin:/registry/cronjobs/; range_end:/registry/cronjobs0; response_count:0; response_revision:700; }","duration":"6.6129827s","start":"2022-05-12T01:17:01.974Z","end":"2022-05-12T01:17:08.587Z","steps":["trace[673773244] 'agreement among raft nodes before linearized reading'  (duration: 5.8586757s)","trace[673773244] 'count revisions from in-memory index tree'  (duration: 753.8425ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-12T01:17:08.587Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:17:03.176Z","time spent":"5.4109103s","remote":"127.0.0.1:42566","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":28,"request content":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true "}
	{"level":"warn","ts":"2022-05-12T01:17:08.587Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:17:01.691Z","time spent":"6.8964365s","remote":"127.0.0.1:42530","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":28,"request content":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true "}
	{"level":"warn","ts":"2022-05-12T01:17:08.587Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:17:01.974Z","time spent":"6.6130236s","remote":"127.0.0.1:42544","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":0,"response size":28,"request content":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true "}
	{"level":"warn","ts":"2022-05-12T01:17:08.587Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"6.4690489s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1133"}
	{"level":"info","ts":"2022-05-12T01:17:08.588Z","caller":"traceutil/trace.go:171","msg":"trace[1431651457] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:700; }","duration":"6.469675s","start":"2022-05-12T01:17:02.118Z","end":"2022-05-12T01:17:08.588Z","steps":["trace[1431651457] 'agreement among raft nodes before linearized reading'  (duration: 5.7147497s)","trace[1431651457] 'range keys from in-memory index tree'  (duration: 754.1747ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-12T01:17:08.588Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:17:02.118Z","time spent":"6.4697368s","remote":"127.0.0.1:42502","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1156,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2022-05-12T01:17:08.587Z","caller":"traceutil/trace.go:171","msg":"trace[1806771618] range","detail":"{range_begin:/registry/daemonsets/; range_end:/registry/daemonsets0; response_count:0; response_revision:700; }","duration":"7.4515507s","start":"2022-05-12T01:17:01.135Z","end":"2022-05-12T01:17:08.587Z","steps":["trace[1806771618] 'agreement among raft nodes before linearized reading'  (duration: 6.6972502s)","trace[1806771618] 'count revisions from in-memory index tree'  (duration: 753.9944ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-12T01:17:08.588Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:17:01.135Z","time spent":"7.4522811s","remote":"127.0.0.1:42618","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":30,"request content":"key:\"/registry/daemonsets/\" range_end:\"/registry/daemonsets0\" count_only:true "}
	
	* 
	* ==> kernel <==
	*  01:17:35 up  2:25,  0 users,  load average: 6.66, 5.86, 4.58
	Linux embed-certs-20220512010611-7184 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [b523f2c5d6b7] <==
	* Trace[1175639905]: ---"About to write a response" 792ms (01:16:27.381)
	Trace[1175639905]: [792.5516ms] [792.5516ms] END
	W0512 01:16:41.573395       1 handler_proxy.go:104] no RequestInfo found in the context
	E0512 01:16:41.573548       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0512 01:16:41.573563       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	{"level":"warn","ts":"2022-05-12T01:17:02.692Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000b16a80/#initially=[https://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	{"level":"warn","ts":"2022-05-12T01:17:03.041Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f30a80/#initially=[https://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E0512 01:17:03.041842       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0512 01:17:03.041902       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0512 01:17:03.042243       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	{"level":"warn","ts":"2022-05-12T01:17:03.042Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001d1cfc0/#initially=[https://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	I0512 01:17:03.042321       1 trace.go:205] Trace[1136970699]: "GuaranteedUpdate etcd3" type:*core.Event (12-May-2022 01:17:01.993) (total time: 1048ms):
	Trace[1136970699]: [1.0485025s] [1.0485025s] END
	E0512 01:17:03.042360       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 100.3µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0512 01:17:03.043808       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0512 01:17:03.045166       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0512 01:17:03.046500       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	I0512 01:17:03.049168       1 trace.go:205] Trace[682879259]: "Get" url:/api/v1/namespaces/kube-system/pods/metrics-server-b955d9d8-dj72t,user-agent:Go-http-client/2.0,audit-id:ea2ec619-e586-4108-aec3-d9f5f8f52844,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (12-May-2022 01:17:01.992) (total time: 1056ms):
	Trace[682879259]: [1.0563334s] [1.0563334s] END
	E0512 01:17:03.050459       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0512 01:17:03.051076       1 timeout.go:137] post-timeout activity - time-elapsed: 9.5828ms, GET "/api/v1/namespaces/kube-system/pods/metrics-server-b955d9d8-dj72t" result: <nil>
	I0512 01:17:03.052138       1 trace.go:205] Trace[1558827293]: "Patch" url:/api/v1/namespaces/kube-system/events/metrics-server-b955d9d8-dj72t.16ee3698e0dba3bc,user-agent:kubelet/v1.23.5 (linux/amd64) kubernetes/c285e78,audit-id:85f0e591-b471-4b40-bcf2-498fb391bb54,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (12-May-2022 01:17:01.993) (total time: 1058ms):
	Trace[1558827293]: [1.0585354s] [1.0585354s] END
	E0512 01:17:03.053083       1 timeout.go:137] post-timeout activity - time-elapsed: 10.9133ms, PATCH "/api/v1/namespaces/kube-system/events/metrics-server-b955d9d8-dj72t.16ee3698e0dba3bc" result: <nil>
	
	* 
	* ==> kube-controller-manager [a69052506663] <==
	* I0512 01:15:30.956479       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-dhfqv"
	I0512 01:15:31.299178       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0512 01:15:31.454242       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-dhfqv"
	I0512 01:15:39.184465       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0512 01:15:39.365986       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-b955d9d8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0512 01:15:39.457361       1 replica_set.go:536] sync "kube-system/metrics-server-b955d9d8" failed with pods "metrics-server-b955d9d8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0512 01:15:39.560403       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-dj72t"
	I0512 01:15:43.510991       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0512 01:15:43.578790       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8469778f77 to 1"
	I0512 01:15:43.582032       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0512 01:15:43.587651       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0512 01:15:43.653430       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0512 01:15:43.654946       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0512 01:15:43.750551       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0512 01:15:43.750596       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0512 01:15:43.864113       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0512 01:15:43.880490       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0512 01:15:43.881012       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-jm4p7"
	I0512 01:15:44.008614       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-bwdns"
	E0512 01:16:00.071080       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 01:16:00.568964       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 01:16:30.254315       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 01:16:30.755177       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 01:17:00.286308       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 01:17:00.794867       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [f5ebcb93e156] <==
	* E0512 01:15:34.658734       1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I0512 01:15:34.664705       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0512 01:15:34.675967       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0512 01:15:34.753502       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0512 01:15:34.757225       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0512 01:15:34.762049       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0512 01:15:34.857624       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0512 01:15:34.857672       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0512 01:15:34.857872       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0512 01:15:35.178606       1 server_others.go:206] "Using iptables Proxier"
	I0512 01:15:35.178749       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0512 01:15:35.178770       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0512 01:15:35.178848       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0512 01:15:35.180125       1 server.go:656] "Version info" version="v1.23.5"
	I0512 01:15:35.182125       1 config.go:317] "Starting service config controller"
	I0512 01:15:35.182160       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0512 01:15:35.182196       1 config.go:226] "Starting endpoint slice config controller"
	I0512 01:15:35.182203       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0512 01:15:35.349933       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0512 01:15:35.349960       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [9f604816537e] <==
	* W0512 01:15:12.668516       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0512 01:15:12.668626       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0512 01:15:12.865214       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0512 01:15:12.865337       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0512 01:15:12.951479       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0512 01:15:12.951611       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0512 01:15:12.951711       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0512 01:15:12.951727       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0512 01:15:12.951724       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0512 01:15:12.951762       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0512 01:15:12.956630       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0512 01:15:12.956735       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0512 01:15:13.081369       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0512 01:15:13.081492       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0512 01:15:13.152421       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0512 01:15:13.152543       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0512 01:15:13.152642       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0512 01:15:13.152665       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0512 01:15:13.161242       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0512 01:15:13.161288       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0512 01:15:13.252324       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0512 01:15:13.252481       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0512 01:15:13.253501       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0512 01:15:13.253614       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0512 01:15:15.060824       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-05-12 01:09:11 UTC, end at Thu 2022-05-12 01:17:36 UTC. --
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: time="2022-05-12T01:17:01Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/podcfb29c14-064a-460f-8a47-3f5667911e1a/0a485f32b86d23942a3247815b04bb8eb0388c63d62e1e34aa821bb34bc22b79: device or resource busy"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: I0512 01:17:01.201681    5195 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod002009a6866b0a2506f8d5c8c4da7548] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod002009a6866b0a2506f8d5c8c4da7548] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod002009a6866b0a2506f8d5c8c4da7548]"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: I0512 01:17:01.201713    5195 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable podcfb29c14-064a-460f-8a47-3f5667911e1a] err="unable to destroy cgroup paths for cgroup [kubepods burstable podcfb29c14-064a-460f-8a47-3f5667911e1a] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/podcfb29c14-064a-460f-8a47-3f5667911e1a]"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: time="2022-05-12T01:17:01Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod9fc427d2e6746d2b3f18846f6f0fcafb/2563f60fad220f07db8f45cc96d5a42c26fb34d6d3e5df05ee88aa982896f7b0: device or resource busy"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: time="2022-05-12T01:17:01Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/besteffort/pod88181eaf-3164-49ec-a268-6e0f32698745/52962f3d9e9a919c14831d3b3674cd3bd03654b4cac2792505be02ed4c3c48fa: device or resource busy"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: time="2022-05-12T01:17:01Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod7bdf10a6ac21254bb5823aca69fc5310/12bad5d82201d738c336fe1786c87d6e5afeae7df56f63d77efc9dbcd020d8e7: device or resource busy"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: I0512 01:17:01.201861    5195 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod9fc427d2e6746d2b3f18846f6f0fcafb] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod9fc427d2e6746d2b3f18846f6f0fcafb] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod9fc427d2e6746d2b3f18846f6f0fcafb]"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: I0512 01:17:01.202022    5195 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods besteffort pod88181eaf-3164-49ec-a268-6e0f32698745] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod88181eaf-3164-49ec-a268-6e0f32698745] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/besteffort/pod88181eaf-3164-49ec-a268-6e0f32698745]"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: I0512 01:17:01.202124    5195 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod7bdf10a6ac21254bb5823aca69fc5310] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod7bdf10a6ac21254bb5823aca69fc5310] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod7bdf10a6ac21254bb5823aca69fc5310]"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: time="2022-05-12T01:17:01Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/besteffort/pod59444036ebaaae96eddd41dadabbc71a/26850d91e05e50e404cfbae0eb9a3758099cd1a8ad614d8e6c7b3f9e1d0d9b18: device or resource busy"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: I0512 01:17:01.212190    5195 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods besteffort pod59444036ebaaae96eddd41dadabbc71a] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod59444036ebaaae96eddd41dadabbc71a] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/besteffort/pod59444036ebaaae96eddd41dadabbc71a]"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: time="2022-05-12T01:17:01Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/besteffort/podf54a67f2-f423-4525-a613-569e73288c94/4170b36e0b4c5f27b0b2178c8592e49502600a4ad2a613c36dcdb3953d5ec28d: device or resource busy"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: time="2022-05-12T01:17:01Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/besteffort/pod8728e39b-198d-41a2-ba6f-5934ef025209/52d5f6fdaf1406ef4c262aba07c84c6a554ab7c117b38097d889ae1a972c7a58: device or resource busy"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: I0512 01:17:01.212251    5195 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods besteffort podf54a67f2-f423-4525-a613-569e73288c94] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podf54a67f2-f423-4525-a613-569e73288c94] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/besteffort/podf54a67f2-f423-4525-a613-569e73288c94]"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: I0512 01:17:01.212283    5195 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods besteffort pod8728e39b-198d-41a2-ba6f-5934ef025209] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod8728e39b-198d-41a2-ba6f-5934ef025209] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/besteffort/pod8728e39b-198d-41a2-ba6f-5934ef025209]"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: time="2022-05-12T01:17:01Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/besteffort/pod97886edb-5d10-49e9-8875-3c306b063e34/3258411661dbdb88691d22599bb15da544c488943246e63a9fc7ccaf03ba5585: device or resource busy"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: I0512 01:17:01.212391    5195 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods besteffort pod97886edb-5d10-49e9-8875-3c306b063e34] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod97886edb-5d10-49e9-8875-3c306b063e34] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/besteffort/pod97886edb-5d10-49e9-8875-3c306b063e34]"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: time="2022-05-12T01:17:01Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/podba40e2cfcc5b57908dd25747d02cea61/11a6c9cca5627b8c45dc17777925da0d62dc2ad308b44f90bf8af74477b3f232: device or resource busy"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: time="2022-05-12T01:17:01Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/besteffort/pod070fb71a-1145-4881-a9cd-076ab7a6d77b/662135225f6c66890eeb0b9b3bdfa106d9b4f7a32ed7054006ac470f1dbfbfe9: device or resource busy"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: I0512 01:17:01.212443    5195 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable podba40e2cfcc5b57908dd25747d02cea61] err="unable to destroy cgroup paths for cgroup [kubepods burstable podba40e2cfcc5b57908dd25747d02cea61] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/podba40e2cfcc5b57908dd25747d02cea61]"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: I0512 01:17:01.212447    5195 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods besteffort pod070fb71a-1145-4881-a9cd-076ab7a6d77b] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod070fb71a-1145-4881-a9cd-076ab7a6d77b] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/besteffort/pod070fb71a-1145-4881-a9cd-076ab7a6d77b]"
	May 12 01:17:02 embed-certs-20220512010611-7184 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	May 12 01:17:02 embed-certs-20220512010611-7184 kubelet[5195]: I0512 01:17:02.959854    5195 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	May 12 01:17:03 embed-certs-20220512010611-7184 systemd[1]: kubelet.service: Succeeded.
	May 12 01:17:03 embed-certs-20220512010611-7184 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [9dc69e11c6a1] <==
	* 2022/05/12 01:16:37 Starting overwatch
	2022/05/12 01:16:37 Using namespace: kubernetes-dashboard
	2022/05/12 01:16:37 Using in-cluster config to connect to apiserver
	2022/05/12 01:16:37 Using secret token for csrf signing
	2022/05/12 01:16:37 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/05/12 01:16:37 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/05/12 01:16:37 Successful initial request to the apiserver, version: v1.23.5
	2022/05/12 01:16:37 Generating JWE encryption key
	2022/05/12 01:16:37 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/05/12 01:16:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/05/12 01:16:38 Initializing JWE encryption key from synchronized object
	2022/05/12 01:16:38 Creating in-cluster Sidecar client
	2022/05/12 01:16:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/05/12 01:16:38 Serving insecurely on HTTP port: 9090
	
	* 
	* ==> storage-provisioner [518dc8f1de40] <==
	* I0512 01:15:42.268097       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0512 01:15:42.359123       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0512 01:15:42.359210       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0512 01:15:42.457547       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0512 01:15:42.457946       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20220512010611-7184_c3a349cb-879f-44fa-ac06-9be541385714!
	I0512 01:15:42.458764       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"49631300-4fcf-4b37-b7ed-3c03968e9dd4", APIVersion:"v1", ResourceVersion:"523", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20220512010611-7184_c3a349cb-879f-44fa-ac06-9be541385714 became leader
	I0512 01:15:42.658482       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20220512010611-7184_c3a349cb-879f-44fa-ac06-9be541385714!
	
	

-- /stdout --
** stderr ** 
	E0512 01:17:35.494037    4928 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
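Note: the "container status" table in the logs above shows every Kubernetes container in the embed-certs node reported as (Paused) except etcd, whose pause call failed ("OCI runtime pause failed: unable to freeze: unknown"). As an illustration only (not part of helpers_test.go or minikube itself), a minimal Go sketch in the style of the cli_runner invocations above that lists paused containers; the file name is hypothetical, while "status=paused" and the --format template are standard docker ps options:

	// paused_list.go: hypothetical helper, not minikube code. Lists the
	// containers a Docker daemon reports as paused, i.e. the state shown
	// in the "container status" table above. Assumes docker is on PATH.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Prints one "ID<TAB>NAME" line per paused container.
		out, err := exec.Command("docker", "ps",
			"--filter", "status=paused",
			"--format", "{{.ID}}\t{{.Names}}").CombinedOutput()
		if err != nil {
			fmt.Printf("docker ps failed: %v\n%s", err, out)
			return
		}
		fmt.Print(string(out))
	}

Since the paused containers live in the inner dockerd of the kic node, from the host this would be run through the node container, e.g. docker exec embed-certs-20220512010611-7184 docker ps --filter status=paused.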
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220512010611-7184 -n embed-certs-20220512010611-7184
E0512 01:17:41.914805    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-20220512010315-7184\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220512010611-7184 -n embed-certs-20220512010611-7184: exit status 2 (6.9416544s)

-- stdout --
	Paused

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "embed-certs-20220512010611-7184" apiserver is not running, skipping kubectl commands (state="Paused")
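The --format={{.APIServer}} argument above is a Go text/template evaluated against minikube's status struct, which is why the paused profile prints the bare word "Paused". A minimal sketch of the mechanism, with a hypothetical Status type standing in for minikube's internal one:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in for minikube's internal status struct;
	// the field names mirror the templates used above ({{.Host}}, {{.APIServer}}).
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		// A paused profile reports a running host but a paused apiserver.
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running", APIServer: "Paused"}) // prints: Paused
	}
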
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220512010611-7184
helpers_test.go:231: (dbg) Done: docker inspect embed-certs-20220512010611-7184: (1.131197s)
helpers_test.go:235: (dbg) docker inspect embed-certs-20220512010611-7184:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f4e28399843b28941bba40e3d3af83b376043607edf06c84a6114ce246bbbf24",
	        "Created": "2022-05-12T01:06:59.3694127Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 211306,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-12T01:09:11.1082331Z",
	            "FinishedAt": "2022-05-12T01:08:51.0918361Z"
	        },
	        "Image": "sha256:8a42e1145657f551cd435eddb43b96ab44d0facbe44106da934225366eeb7757",
	        "ResolvConfPath": "/var/lib/docker/containers/f4e28399843b28941bba40e3d3af83b376043607edf06c84a6114ce246bbbf24/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f4e28399843b28941bba40e3d3af83b376043607edf06c84a6114ce246bbbf24/hostname",
	        "HostsPath": "/var/lib/docker/containers/f4e28399843b28941bba40e3d3af83b376043607edf06c84a6114ce246bbbf24/hosts",
	        "LogPath": "/var/lib/docker/containers/f4e28399843b28941bba40e3d3af83b376043607edf06c84a6114ce246bbbf24/f4e28399843b28941bba40e3d3af83b376043607edf06c84a6114ce246bbbf24-json.log",
	        "Name": "/embed-certs-20220512010611-7184",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220512010611-7184:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220512010611-7184",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/983caab56e6e21128f0b32a653617b71cbd07396cc92226128f32156658526cb-init/diff:/var/lib/docker/overlay2/f5e9ce82d74c36e0b6de7ac5d28dc4eacb2abae094b3550db66974795ad74446/diff:/var/lib/docker/overlay2/63fa2edc88f635760dd847938696c8fc50aad8a0c51ab6c6f93b0aa9a6fcefe6/diff:/var/lib/docker/overlay2/5fcaace21e215fd120a775470b4a78ef06d9f466e024f0b1baef234ddf87f04f/diff:/var/lib/docker/overlay2/6c9accb62919ca088333f8b3a193f43886e2c3042c5ec726e1c7fd19ee183024/diff:/var/lib/docker/overlay2/a9a1aea640018bd9e524c36422ea97cc97a391601b718810fed779260e48c87a/diff:/var/lib/docker/overlay2/2476b6e8d656e43d76d4288098236bc765cb37fa9dde696f09bfce485e99229e/diff:/var/lib/docker/overlay2/6cdf08ddc61561e961de67f3e14478397868c53a4d8552909a5b84e35b28db1f/diff:/var/lib/docker/overlay2/6f6f8b6686cc7838a52ce30fdc4e938cde2fb68b161e09a9bc81fa11011619a6/diff:/var/lib/docker/overlay2/9f55b91f762ea60dc3da71cf56e5ca24181e712389d99cb8d972bba474f5d6a4/diff:/var/lib/docker/overlay2/4c269b
df57eae0d21d2034dc086308278f9b00f2301c726b6de876b9ff97298d/diff:/var/lib/docker/overlay2/5fb8ed9b9e765df8150f27b7847ec7333b2c7d978dbc1161da97d630ec7e43e2/diff:/var/lib/docker/overlay2/3a297e9f6ab51d930ef61c49a0fea772cdc2a2704a077db6adb142eb044d9a93/diff:/var/lib/docker/overlay2/2068464f4655627fb513b31660ab34c938e559da10d44fd723ce9e1d744a037d/diff:/var/lib/docker/overlay2/f783596106daebadefbb7774015c369d757665d434c96581b426b2e5f5b453c4/diff:/var/lib/docker/overlay2/ac8927d3cc7829cc82e4c0214dd4adee97aedbe2b7d992cbbc08288443c8e320/diff:/var/lib/docker/overlay2/91eff4612dd56b2c82f4357b879f9cdcbc13704bf8f6dcbfc56febb104774843/diff:/var/lib/docker/overlay2/ee7366acc162efb7b878c4c56df021a8ef756fa595230ffe898cd0dd0355eb44/diff:/var/lib/docker/overlay2/ab5df115d2ec8cd71172942a2e449de32b93a3b6b5d90122c0e734c5e11d6bb2/diff:/var/lib/docker/overlay2/59fcbe9b552129cdc5c96e9e8ff27f4b88a12645aec3cf8f48f28d91521760e0/diff:/var/lib/docker/overlay2/0684681eb5880654b43e8803ef8f17b85e6129c85ff81c13b509563184a77625/diff:/var/lib/d
ocker/overlay2/8bbd24801b480df6ca8545e8f8bb09c17b7598c2868fb94ea5b8775ce2f311b4/diff:/var/lib/docker/overlay2/f28553cc59fccccfdfc5c24b7b8dfe4055c625d0a004731911c34b4ba32a9dfb/diff:/var/lib/docker/overlay2/2e47a8ef6e4481885d71f57a1d9ef99898b741644addd2796de5c2f4c696edb0/diff:/var/lib/docker/overlay2/e1f1eaaa809c974dabd197f590d19be05325f506a53a9a1f8ba29defd7096f60/diff:/var/lib/docker/overlay2/83ce12af60df76f98283ed8f3450cd5727b42d06055b18f04a18068b105ae128/diff:/var/lib/docker/overlay2/5fd34820f54e7f8f0c898c21b5d9d030e5b82c65c901897306c3db475481167f/diff:/var/lib/docker/overlay2/3ecb6f46fa47a8906ff5de1da5a63be9c664ff5bc66faf870126868d36bb77c6/diff:/var/lib/docker/overlay2/ccb92f12dd3e84b11b2c9b1ef6a0581ad5894648432ebe7cb5d16d48c7aacf6e/diff:/var/lib/docker/overlay2/7c6d11dc9abdd4916f3759c8ae4db8c3011cff872f2fd3cc502e7f663e496765/diff:/var/lib/docker/overlay2/b865b0351704115fa113e25f7651d1dc1e2f0348c332552e555e898094f34802/diff:/var/lib/docker/overlay2/bbcf207462c3f88368214d8e4ca222f28a828bd30661741d421665b4d10
80f07/diff:/var/lib/docker/overlay2/b554a32e9a2e4d3773e918754c27a1b32bc7ec5327d3bd1f52d7a146a07fa2c5/diff:/var/lib/docker/overlay2/d0a997bacfa9b1b54f61c62f00ad2797616ea9bb55182aad68ed805f96f5f72b/diff:/var/lib/docker/overlay2/e0c168ecfe6a93618f4f653c1aba422023114f242ab1045591d0c8454573d5c2/diff:/var/lib/docker/overlay2/fb67af38a46ef55935fcfb4f1be5f34b45b3d0e1c571538828117f23eedea417/diff:/var/lib/docker/overlay2/e96ed0776e5f27ef225469ac5f5e8ed2e299c72d5db88782599c0fdd1cec2fe3/diff:/var/lib/docker/overlay2/91b77e60e0a7864ace4f5a4d65f465bd7fe862616a87a74ee9fee21dc5dceb07/diff:/var/lib/docker/overlay2/9829211293f70b356dfa8d07b5dbbc3a6d05415cbd2840fd9dd948b8b315bf18/diff:/var/lib/docker/overlay2/dc35dda36e34a2f4f3a5d958b1a7d4d75db8655c4bc7b4b3d9591f43f9a645fc/diff:/var/lib/docker/overlay2/968c2bb04f641a9c8bd30d38659dc28973b31bfd577bb1aa355ae6c2ab4a0d34/diff:/var/lib/docker/overlay2/37432c6ae0b10a52e95b215fdd2e256362060f32c4a52f0d2021b6e10b3ed77b/diff:/var/lib/docker/overlay2/77687f9734b19f3e8a5bb08b07067e0b572775
20867b7a6ad80b67ffebe332d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/983caab56e6e21128f0b32a653617b71cbd07396cc92226128f32156658526cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/983caab56e6e21128f0b32a653617b71cbd07396cc92226128f32156658526cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/983caab56e6e21128f0b32a653617b71cbd07396cc92226128f32156658526cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220512010611-7184",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220512010611-7184/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220512010611-7184",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220512010611-7184",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220512010611-7184",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6c7da326ef50888a0effb09dbadc0bfad9746a5feb2bc86f00982944d9f52aab",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50419"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50415"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50416"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50417"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6c7da326ef50",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220512010611-7184": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f4e28399843b",
	                        "embed-certs-20220512010611-7184"
	                    ],
	                    "NetworkID": "17506d6362697ddcd7c4bad2fcbfa96514130db5d8a4562b1b83e75435155eb1",
	                    "EndpointID": "9ebc9e54e0ad4ed048c06fea5ac4ab54907e734c8ffda363d26e9eb3ad57346c",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
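The fields the post-mortem reads out of docker inspect above (State.Status, State.Paused, the published ports) can also be fetched programmatically; a minimal sketch, assuming the github.com/docker/docker/client Go SDK and a local daemon:

	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		// Inspect by container name, exactly as the post-mortem above does.
		info, err := cli.ContainerInspect(context.Background(), "embed-certs-20220512010611-7184")
		if err != nil {
			panic(err)
		}
		// Running=true with Paused=false matches the State block in the dump above.
		fmt.Printf("status=%s running=%v paused=%v\n", info.State.Status, info.State.Running, info.State.Paused)
	}
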
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220512010611-7184 -n embed-certs-20220512010611-7184
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220512010611-7184 -n embed-certs-20220512010611-7184: exit status 2 (6.8267076s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-20220512010611-7184 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p embed-certs-20220512010611-7184 logs -n 25: (59.4697281s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |                    Profile                     |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p                                                | embed-certs-20220512010611-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:06 GMT | 12 May 22 01:08 GMT |
	|         | embed-certs-20220512010611-7184                   |                                                |                   |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                |                   |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                                |                   |         |                     |                     |
	|         | --driver=docker                                   |                                                |                   |         |                     |                     |
	|         | --kubernetes-version=v1.23.5                      |                                                |                   |         |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220512010611-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:08 GMT | 12 May 22 01:08 GMT |
	|         | embed-certs-20220512010611-7184                   |                                                |                   |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |                   |         |                     |                     |
	| stop    | -p                                                | embed-certs-20220512010611-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:08 GMT | 12 May 22 01:08 GMT |
	|         | embed-certs-20220512010611-7184                   |                                                |                   |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |                   |         |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220512010611-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:08 GMT | 12 May 22 01:08 GMT |
	|         | embed-certs-20220512010611-7184                   |                                                |                   |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |                   |         |                     |                     |
	| start   | -p                                                | force-systemd-env-20220512010244-7184          | minikube4\jenkins | v1.25.2 | 12 May 22 01:02 GMT | 12 May 22 01:10 GMT |
	|         | force-systemd-env-20220512010244-7184             |                                                |                   |         |                     |                     |
	|         | --memory=2048 --alsologtostderr -v=5              |                                                |                   |         |                     |                     |
	|         | --driver=docker                                   |                                                |                   |         |                     |                     |
	| ssh     | force-systemd-env-20220512010244-7184             | force-systemd-env-20220512010244-7184          | minikube4\jenkins | v1.25.2 | 12 May 22 01:11 GMT | 12 May 22 01:11 GMT |
	|         | ssh docker info --format                          |                                                |                   |         |                     |                     |
	|         | {{.CgroupDriver}}                                 |                                                |                   |         |                     |                     |
	| delete  | -p                                                | force-systemd-env-20220512010244-7184          | minikube4\jenkins | v1.25.2 | 12 May 22 01:11 GMT | 12 May 22 01:11 GMT |
	|         | force-systemd-env-20220512010244-7184             |                                                |                   |         |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220512011134-7184      | minikube4\jenkins | v1.25.2 | 12 May 22 01:11 GMT | 12 May 22 01:11 GMT |
	|         | disable-driver-mounts-20220512011134-7184         |                                                |                   |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220512010246-7184            | minikube4\jenkins | v1.25.2 | 12 May 22 01:02 GMT | 12 May 22 01:11 GMT |
	|         | old-k8s-version-20220512010246-7184               |                                                |                   |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                |                   |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                |                   |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                |                   |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                |                   |         |                     |                     |
	|         | --keep-context=false                              |                                                |                   |         |                     |                     |
	|         | --driver=docker                                   |                                                |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                                                |                   |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220512010246-7184            | minikube4\jenkins | v1.25.2 | 12 May 22 01:12 GMT | 12 May 22 01:12 GMT |
	|         | old-k8s-version-20220512010246-7184               |                                                |                   |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |                   |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220512010246-7184            | minikube4\jenkins | v1.25.2 | 12 May 22 01:12 GMT | 12 May 22 01:12 GMT |
	|         | old-k8s-version-20220512010246-7184               |                                                |                   |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |                   |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220512010246-7184            | minikube4\jenkins | v1.25.2 | 12 May 22 01:12 GMT | 12 May 22 01:12 GMT |
	|         | old-k8s-version-20220512010246-7184               |                                                |                   |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |                   |         |                     |                     |
	| start   | -p no-preload-20220512010315-7184                 | no-preload-20220512010315-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:07 GMT | 12 May 22 01:13 GMT |
	|         | --memory=2200 --alsologtostderr                   |                                                |                   |         |                     |                     |
	|         | --wait=true --preload=false                       |                                                |                   |         |                     |                     |
	|         | --driver=docker                                   |                                                |                   |         |                     |                     |
	|         | --kubernetes-version=v1.23.6-rc.0                 |                                                |                   |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220512011148-7184 | minikube4\jenkins | v1.25.2 | 12 May 22 01:11 GMT | 12 May 22 01:14 GMT |
	|         | default-k8s-different-port-20220512011148-7184    |                                                |                   |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |                   |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                |                   |         |                     |                     |
	|         | --kubernetes-version=v1.23.5                      |                                                |                   |         |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220512011148-7184 | minikube4\jenkins | v1.25.2 | 12 May 22 01:14 GMT | 12 May 22 01:14 GMT |
	|         | default-k8s-different-port-20220512011148-7184    |                                                |                   |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                |                   |         |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220512011148-7184 | minikube4\jenkins | v1.25.2 | 12 May 22 01:14 GMT | 12 May 22 01:14 GMT |
	|         | default-k8s-different-port-20220512011148-7184    |                                                |                   |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                |                   |         |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220512011148-7184 | minikube4\jenkins | v1.25.2 | 12 May 22 01:14 GMT | 12 May 22 01:14 GMT |
	|         | default-k8s-different-port-20220512011148-7184    |                                                |                   |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |                   |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220512010315-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:14 GMT | 12 May 22 01:14 GMT |
	|         | no-preload-20220512010315-7184                    |                                                |                   |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                |                   |         |                     |                     |
	| pause   | -p                                                | no-preload-20220512010315-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:14 GMT | 12 May 22 01:14 GMT |
	|         | no-preload-20220512010315-7184                    |                                                |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |                   |         |                     |                     |
	| unpause | -p                                                | no-preload-20220512010315-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:15 GMT | 12 May 22 01:15 GMT |
	|         | no-preload-20220512010315-7184                    |                                                |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                |                   |         |                     |                     |
	| start   | -p                                                | embed-certs-20220512010611-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:08 GMT | 12 May 22 01:15 GMT |
	|         | embed-certs-20220512010611-7184                   |                                                |                   |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                |                   |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                                |                   |         |                     |                     |
	|         | --driver=docker                                   |                                                |                   |         |                     |                     |
	|         | --kubernetes-version=v1.23.5                      |                                                |                   |         |                     |                     |
	| delete  | -p                                                | no-preload-20220512010315-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:15 GMT | 12 May 22 01:16 GMT |
	|         | no-preload-20220512010315-7184                    |                                                |                   |         |                     |                     |
	| delete  | -p                                                | no-preload-20220512010315-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:16 GMT | 12 May 22 01:16 GMT |
	|         | no-preload-20220512010315-7184                    |                                                |                   |         |                     |                     |
	| ssh     | -p                                                | embed-certs-20220512010611-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:16 GMT | 12 May 22 01:16 GMT |
	|         | embed-certs-20220512010611-7184                   |                                                |                   |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                |                   |         |                     |                     |
	| logs    | embed-certs-20220512010611-7184                   | embed-certs-20220512010611-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:17 GMT | 12 May 22 01:17 GMT |
	|         | logs -n 25                                        |                                                |                   |         |                     |                     |
	|---------|---------------------------------------------------|------------------------------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/12 01:16:16
	Running on machine: minikube4
	Binary: Built with gc go1.18.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0512 01:16:16.731520    2560 out.go:296] Setting OutFile to fd 1852 ...
	I0512 01:16:16.790528    2560 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:16:16.790528    2560 out.go:309] Setting ErrFile to fd 1792...
	I0512 01:16:16.790528    2560 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:16:16.802535    2560 out.go:303] Setting JSON to false
	I0512 01:16:16.804525    2560 start.go:115] hostinfo: {"hostname":"minikube4","uptime":16629,"bootTime":1652301547,"procs":164,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0512 01:16:16.804525    2560 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0512 01:16:16.810569    2560 out.go:177] * [newest-cni-20220512011616-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0512 01:16:16.813541    2560 notify.go:193] Checking for updates...
	I0512 01:16:16.816541    2560 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0512 01:16:16.820526    2560 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0512 01:16:16.822531    2560 out.go:177]   - MINIKUBE_LOCATION=13639
	I0512 01:16:16.825517    2560 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0512 01:16:13.643767    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:15.656108    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:15.420131    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:17.995331    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:16.829557    2560 config.go:178] Loaded profile config "default-k8s-different-port-20220512011148-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:16:16.830548    2560 config.go:178] Loaded profile config "embed-certs-20220512010611-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:16:16.830548    2560 config.go:178] Loaded profile config "old-k8s-version-20220512010246-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0512 01:16:16.831523    2560 driver.go:358] Setting default libvirt URI to qemu:///system
	I0512 01:16:20.037097    2560 docker.go:137] docker version: linux-20.10.14
	I0512 01:16:20.046519    2560 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:16:22.283107    2560 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2364733s)
	I0512 01:16:22.284599    2560 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:85 OomKillDisable:true NGoroutines:63 SystemTime:2022-05-12 01:16:21.1622095 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:16:22.294749    2560 out.go:177] * Using the docker driver based on user configuration
	I0512 01:16:18.160292    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:20.650197    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:22.296746    2560 start.go:284] selected driver: docker
	I0512 01:16:22.296746    2560 start.go:801] validating driver "docker" against <nil>
	I0512 01:16:22.296746    2560 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0512 01:16:22.368275    2560 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:16:24.545409    2560 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1770225s)
	I0512 01:16:24.545409    2560 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:85 OomKillDisable:true NGoroutines:63 SystemTime:2022-05-12 01:16:23.4499135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:16:24.545409    2560 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	W0512 01:16:24.545409    2560 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0512 01:16:24.546410    2560 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0512 01:16:24.549409    2560 out.go:177] * Using Docker Desktop driver with the root privilege
	I0512 01:16:24.551412    2560 cni.go:95] Creating CNI manager for ""
	I0512 01:16:24.551412    2560 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 01:16:24.551412    2560 start_flags.go:306] config:
	{Name:newest-cni-20220512011616-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:newest-cni-20220512011616-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 01:16:24.556410    2560 out.go:177] * Starting control plane node newest-cni-20220512011616-7184 in cluster newest-cni-20220512011616-7184
	I0512 01:16:24.560409    2560 cache.go:120] Beginning downloading kic base image for docker with docker
	I0512 01:16:24.564405    2560 out.go:177] * Pulling base image ...
	I0512 01:16:20.420793    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:22.423088    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:24.429191    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:24.567405    2560 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime docker
	I0512 01:16:24.567405    2560 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0512 01:16:24.567405    2560 preload.go:148] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4
	I0512 01:16:24.567405    2560 cache.go:57] Caching tarball of preloaded images
	I0512 01:16:24.568410    2560 preload.go:174] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0512 01:16:24.568410    2560 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6-rc.0 on docker
	I0512 01:16:24.568410    2560 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\config.json ...
	I0512 01:16:24.569423    2560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\config.json: {Name:mk9ed3823f0455a8f954e369d660954dc104babf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:16:25.711699    2560 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
	I0512 01:16:25.711699    2560 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in daemon, skipping load
	I0512 01:16:25.711699    2560 cache.go:206] Successfully downloaded all kic artifacts
	I0512 01:16:25.711699    2560 start.go:352] acquiring machines lock for newest-cni-20220512011616-7184: {Name:mkc09b0a00a54bffa6656c656699ec1148211894 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 01:16:25.712147    2560 start.go:356] acquired machines lock for "newest-cni-20220512011616-7184" in 110.4µs
	I0512 01:16:25.712147    2560 start.go:91] Provisioning new machine with config: &{Name:newest-cni-20220512011616-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:newest-cni-20220512011616-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 01:16:25.712675    2560 start.go:131] createHost starting for "" (driver="docker")
	I0512 01:16:25.716699    2560 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0512 01:16:25.717122    2560 start.go:165] libmachine.API.Create for "newest-cni-20220512011616-7184" (driver="docker")
	I0512 01:16:25.717165    2560 client.go:168] LocalClient.Create starting
	I0512 01:16:25.717354    2560 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0512 01:16:25.717354    2560 main.go:134] libmachine: Decoding PEM data...
	I0512 01:16:25.717354    2560 main.go:134] libmachine: Parsing certificate...
	I0512 01:16:25.717932    2560 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0512 01:16:25.717932    2560 main.go:134] libmachine: Decoding PEM data...
	I0512 01:16:25.717932    2560 main.go:134] libmachine: Parsing certificate...
	I0512 01:16:25.729695    2560 cli_runner.go:164] Run: docker network inspect newest-cni-20220512011616-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 01:16:23.152744    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:25.649812    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:26.928418    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:28.937556    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	W0512 01:16:26.874363    2560 cli_runner.go:211] docker network inspect newest-cni-20220512011616-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 01:16:26.972439    2560 cli_runner.go:217] Completed: docker network inspect newest-cni-20220512011616-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1445541s)
	I0512 01:16:26.988414    2560 network_create.go:272] running [docker network inspect newest-cni-20220512011616-7184] to gather additional debugging logs...
	I0512 01:16:26.988414    2560 cli_runner.go:164] Run: docker network inspect newest-cni-20220512011616-7184
	W0512 01:16:28.153691    2560 cli_runner.go:211] docker network inspect newest-cni-20220512011616-7184 returned with exit code 1
	I0512 01:16:28.153691    2560 cli_runner.go:217] Completed: docker network inspect newest-cni-20220512011616-7184: (1.1652173s)
	I0512 01:16:28.153691    2560 network_create.go:275] error running [docker network inspect newest-cni-20220512011616-7184]: docker network inspect newest-cni-20220512011616-7184: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-20220512011616-7184
	I0512 01:16:28.153691    2560 network_create.go:277] output of [docker network inspect newest-cni-20220512011616-7184]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-20220512011616-7184
	
	** /stderr **
	I0512 01:16:28.166728    2560 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 01:16:29.319410    2560 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1524153s)
	I0512 01:16:29.341981    2560 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00014e160] misses:0}
	I0512 01:16:29.341981    2560 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:16:29.341981    2560 network_create.go:115] attempt to create docker network newest-cni-20220512011616-7184 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0512 01:16:29.348980    2560 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220512011616-7184
	I0512 01:16:30.640275    2560 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true newest-cni-20220512011616-7184: (1.2912279s)
	I0512 01:16:30.640275    2560 network_create.go:99] docker network newest-cni-20220512011616-7184 192.168.49.0/24 created
	I0512 01:16:30.640275    2560 kic.go:106] calculated static IP "192.168.49.2" for the "newest-cni-20220512011616-7184" container
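
The kic layer above reserves the free subnet 192.168.49.0/24, places the gateway at .1, and hands the first node .2. A minimal Go sketch of that addressing convention; the helper name nthIP is hypothetical and this is not minikube's actual code:

package main

import (
	"fmt"
	"net"
)

// nthIP returns the nth address in a /24 such as 192.168.49.0/24.
// With the gateway at .1 (as in the log above), the first node lands on .2.
func nthIP(cidr string, n byte) (net.IP, error) {
	ip, _, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip = ip.To4()
	if ip == nil {
		return nil, fmt.Errorf("not an IPv4 subnet: %s", cidr)
	}
	out := make(net.IP, len(ip))
	copy(out, ip)
	out[3] += n // network address + n, e.g. n=2 -> 192.168.49.2
	return out, nil
}

func main() {
	ip, err := nthIP("192.168.49.0/24", 2)
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 192.168.49.2
}
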
	I0512 01:16:30.653276    2560 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0512 01:16:27.655025    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:30.153863    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:32.162511    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:31.416666    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:33.429148    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:31.806706    2560 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1533706s)
	I0512 01:16:31.813708    2560 cli_runner.go:164] Run: docker volume create newest-cni-20220512011616-7184 --label name.minikube.sigs.k8s.io=newest-cni-20220512011616-7184 --label created_by.minikube.sigs.k8s.io=true
	I0512 01:16:32.987718    2560 cli_runner.go:217] Completed: docker volume create newest-cni-20220512011616-7184 --label name.minikube.sigs.k8s.io=newest-cni-20220512011616-7184 --label created_by.minikube.sigs.k8s.io=true: (1.1739498s)
	I0512 01:16:32.987718    2560 oci.go:103] Successfully created a docker volume newest-cni-20220512011616-7184
	I0512 01:16:32.994718    2560 cli_runner.go:164] Run: docker run --rm --name newest-cni-20220512011616-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20220512011616-7184 --entrypoint /usr/bin/test -v newest-cni-20220512011616-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib
	I0512 01:16:36.553127    2560 cli_runner.go:217] Completed: docker run --rm --name newest-cni-20220512011616-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20220512011616-7184 --entrypoint /usr/bin/test -v newest-cni-20220512011616-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib: (3.5582267s)
	I0512 01:16:36.553127    2560 oci.go:107] Successfully prepared a docker volume newest-cni-20220512011616-7184
	I0512 01:16:36.553127    2560 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime docker
	I0512 01:16:36.553127    2560 kic.go:179] Starting extracting preloaded images to volume ...
	I0512 01:16:36.563406    2560 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20220512011616-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir
	I0512 01:16:34.165401    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:36.645803    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:35.924732    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:38.424726    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:38.650005    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:40.653524    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:40.425061    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:42.425638    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:44.930339    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:42.674179    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:45.151951    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:47.153770    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:47.427010    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:49.432201    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:49.157438    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:51.649156    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:51.924038    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:53.932652    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:53.667150    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:56.151572    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:56.423118    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:16:58.159462    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:08.598898    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:08.925617    2560 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-20220512011616-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir: (32.3596272s)
	I0512 01:17:08.925617    2560 kic.go:188] duration metric: took 32.370826 seconds to extract preloaded images to volume
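
The 32-second step above extracts the preloaded image tarball into the node's Docker volume by running tar inside a throwaway kicbase container. A hedged Go sketch of composing that docker run invocation with os/exec; extractPreload is a hypothetical helper and the paths in main are placeholders, not the values from the log:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload mirrors the `docker run --rm --entrypoint /usr/bin/tar ...`
// invocation in the log: mount the host tarball read-only, mount the node's
// volume at /extractDir, and untar with lz4 decompression inside the container.
func extractPreload(tarball, volume, image string) *exec.Cmd {
	return exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
}

func main() {
	cmd := extractPreload(
		`C:\path\to\preloaded-images.tar.lz4`, // placeholder host tarball path
		"newest-cni-20220512011616-7184",      // docker volume name from the log
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138",
	)
	fmt.Println(cmd.String()) // print rather than run, for illustration
}
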
	I0512 01:17:08.934984    2560 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:17:11.115515    2560 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1804189s)
	I0512 01:17:11.115878    2560 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:87 OomKillDisable:true NGoroutines:63 SystemTime:2022-05-12 01:17:10.0098897 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}

	I0512 01:17:11.127846    2560 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0512 01:17:08.597927    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:10.653825    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:10.945551    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:13.430017    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:13.264381    2560 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.1364256s)
	I0512 01:17:13.270379    2560 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-20220512011616-7184 --name newest-cni-20220512011616-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20220512011616-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-20220512011616-7184 --network newest-cni-20220512011616-7184 --ip 192.168.49.2 --volume newest-cni-20220512011616-7184:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
	I0512 01:17:15.535544    2560 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-20220512011616-7184 --name newest-cni-20220512011616-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-20220512011616-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-20220512011616-7184 --network newest-cni-20220512011616-7184 --ip 192.168.49.2 --volume newest-cni-20220512011616-7184:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: (2.2639077s)
	I0512 01:17:15.547222    2560 cli_runner.go:164] Run: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Running}}
	I0512 01:17:16.710827    2560 cli_runner.go:217] Completed: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Running}}: (1.1635447s)
	I0512 01:17:16.717829    2560 cli_runner.go:164] Run: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}
	I0512 01:17:12.659098    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:15.167906    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:15.435756    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:17.930373    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:19.931825    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:17.856156    2560 cli_runner.go:217] Completed: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}: (1.1371814s)
	I0512 01:17:17.864286    2560 cli_runner.go:164] Run: docker exec newest-cni-20220512011616-7184 stat /var/lib/dpkg/alternatives/iptables
	I0512 01:17:19.150505    2560 cli_runner.go:217] Completed: docker exec newest-cni-20220512011616-7184 stat /var/lib/dpkg/alternatives/iptables: (1.2860794s)
	I0512 01:17:19.150572    2560 oci.go:247] the created container "newest-cni-20220512011616-7184" has a running status.
	I0512 01:17:19.150643    2560 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-20220512011616-7184\id_rsa...
	I0512 01:17:19.237213    2560 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-20220512011616-7184\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
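
The id_rsa/authorized_keys step above provisions SSH access to the node container. A rough Go sketch of generating such a keypair and the one-line public key that lands in /home/docker/.ssh/authorized_keys, assuming the standard library plus golang.org/x/crypto/ssh; an illustration, not minikube's kic code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate an RSA key and write the PEM-encoded private half to id_rsa.
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	pemBytes := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(priv),
	})
	if err := os.WriteFile("id_rsa", pemBytes, 0o600); err != nil {
		panic(err)
	}
	// Emit the authorized_keys form of the public half.
	pub, err := ssh.NewPublicKey(&priv.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(ssh.MarshalAuthorizedKey(pub))) // "ssh-rsa AAAA...\n"
}
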
	I0512 01:17:20.509935    2560 cli_runner.go:164] Run: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}
	I0512 01:17:21.585980    2560 cli_runner.go:217] Completed: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}: (1.0749832s)
	I0512 01:17:21.603344    2560 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0512 01:17:21.603344    2560 kic_runner.go:114] Args: [docker exec --privileged newest-cni-20220512011616-7184 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0512 01:17:17.657313    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:19.666544    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:22.154476    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:22.420308    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:24.435331    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:22.958356    2560 kic_runner.go:123] Done: [docker exec --privileged newest-cni-20220512011616-7184 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.3538666s)
	I0512 01:17:22.964307    2560 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-20220512011616-7184\id_rsa...
	I0512 01:17:23.489777    2560 cli_runner.go:164] Run: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}
	I0512 01:17:24.608251    2560 cli_runner.go:217] Completed: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}: (1.1184162s)
	I0512 01:17:24.608251    2560 machine.go:88] provisioning docker machine ...
	I0512 01:17:24.608251    2560 ubuntu.go:169] provisioning hostname "newest-cni-20220512011616-7184"
	I0512 01:17:24.614237    2560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184
	I0512 01:17:25.738277    2560 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184: (1.123982s)
	I0512 01:17:25.741285    2560 main.go:134] libmachine: Using SSH client type: native
	I0512 01:17:25.742278    2560 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50805 <nil> <nil>}
	I0512 01:17:25.742278    2560 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220512011616-7184 && echo "newest-cni-20220512011616-7184" | sudo tee /etc/hostname
	I0512 01:17:25.969129    2560 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220512011616-7184
	
	I0512 01:17:25.979905    2560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184
	I0512 01:17:24.660694    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:27.151723    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:26.931437    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:28.935085    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:27.105931    2560 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184: (1.1258559s)
	I0512 01:17:27.109954    2560 main.go:134] libmachine: Using SSH client type: native
	I0512 01:17:27.110535    2560 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50805 <nil> <nil>}
	I0512 01:17:27.110535    2560 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220512011616-7184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220512011616-7184/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220512011616-7184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0512 01:17:27.298537    2560 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0512 01:17:27.298537    2560 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0512 01:17:27.299127    2560 ubuntu.go:177] setting up certificates
	I0512 01:17:27.299127    2560 provision.go:83] configureAuth start
	I0512 01:17:27.309174    2560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220512011616-7184
	I0512 01:17:28.416942    2560 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220512011616-7184: (1.107643s)
	I0512 01:17:28.417164    2560 provision.go:138] copyHostCerts
	I0512 01:17:28.417431    2560 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0512 01:17:28.417431    2560 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0512 01:17:28.418091    2560 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0512 01:17:28.418739    2560 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0512 01:17:28.418739    2560 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0512 01:17:28.419454    2560 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0512 01:17:28.420291    2560 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0512 01:17:28.420291    2560 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0512 01:17:28.420971    2560 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0512 01:17:28.421571    2560 provision.go:112] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-20220512011616-7184 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220512011616-7184]
	I0512 01:17:28.723293    2560 provision.go:172] copyRemoteCerts
	I0512 01:17:28.733906    2560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0512 01:17:28.740275    2560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184
	I0512 01:17:29.852904    2560 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184: (1.1125204s)
	I0512 01:17:29.853524    2560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50805 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-20220512011616-7184\id_rsa Username:docker}
	I0512 01:17:30.010776    2560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.2768049s)
	I0512 01:17:30.011598    2560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0512 01:17:30.073129    2560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1261 bytes)
	I0512 01:17:30.127753    2560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0512 01:17:30.185782    2560 provision.go:86] duration metric: configureAuth took 2.8865074s
	I0512 01:17:30.185782    2560 ubuntu.go:193] setting minikube options for container-runtime
	I0512 01:17:30.186467    2560 config.go:178] Loaded profile config "newest-cni-20220512011616-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6-rc.0
	I0512 01:17:30.195638    2560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184
	I0512 01:17:31.309186    2560 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184: (1.1134076s)
	I0512 01:17:31.315103    2560 main.go:134] libmachine: Using SSH client type: native
	I0512 01:17:31.315334    2560 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50805 <nil> <nil>}
	I0512 01:17:31.315334    2560 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0512 01:17:31.450063    2560 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0512 01:17:31.450063    2560 ubuntu.go:71] root file system type: overlay
	I0512 01:17:31.450063    2560 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0512 01:17:31.460061    2560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184
	I0512 01:17:29.160553    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:31.655475    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:31.415060    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:33.419688    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:32.550749    2560 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184: (1.0905455s)
	I0512 01:17:32.555870    2560 main.go:134] libmachine: Using SSH client type: native
	I0512 01:17:32.555870    2560 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50805 <nil> <nil>}
	I0512 01:17:32.556523    2560 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0512 01:17:32.722356    2560 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0512 01:17:32.729347    2560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184
	I0512 01:17:33.795102    2560 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184: (1.0656998s)
	I0512 01:17:33.800662    2560 main.go:134] libmachine: Using SSH client type: native
	I0512 01:17:33.800755    2560 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50805 <nil> <nil>}
	I0512 01:17:33.800755    2560 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0512 01:17:35.149227    2560 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-05 13:17:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-12 01:17:32.702021000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0512 01:17:35.149390    2560 machine.go:91] provisioned docker machine in 10.5405971s
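
The unit update above follows a compare-then-swap pattern: diff the staged docker.service.new against the installed unit, and only move it into place (followed by daemon-reload and restart) when they differ, which is why the diff output appears in the log. A minimal Go sketch of that pattern on local files; updateIfChanged is a hypothetical helper, and the systemctl side is left to the caller:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// updateIfChanged mirrors the `diff ... || { mv ...; restart; }` idiom from
// the log: swap the staged file in only when its contents differ from what
// is already installed, and report whether a restart is needed.
func updateIfChanged(current, next string) (changed bool, err error) {
	old, err := os.ReadFile(current)
	if err != nil && !os.IsNotExist(err) {
		return false, err
	}
	fresh, err := os.ReadFile(next)
	if err != nil {
		return false, err
	}
	if bytes.Equal(old, fresh) {
		return false, os.Remove(next) // no change; discard the staged copy
	}
	return true, os.Rename(next, current) // caller runs daemon-reload/restart
}

func main() {
	_ = os.WriteFile("docker.service", []byte("old\n"), 0o644)
	_ = os.WriteFile("docker.service.new", []byte("new\n"), 0o644)
	changed, err := updateIfChanged("docker.service", "docker.service.new")
	fmt.Println(changed, err) // true <nil>
}
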
	I0512 01:17:35.149490    2560 client.go:171] LocalClient.Create took 1m9.4287569s
	I0512 01:17:35.149548    2560 start.go:173] duration metric: libmachine.API.Create for "newest-cni-20220512011616-7184" took 1m9.4288897s
	I0512 01:17:35.149548    2560 start.go:306] post-start starting for "newest-cni-20220512011616-7184" (driver="docker")
	I0512 01:17:35.149548    2560 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0512 01:17:35.165028    2560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0512 01:17:35.173495    2560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184
	I0512 01:17:36.352732    2560 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184: (1.1791205s)
	I0512 01:17:36.352732    2560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50805 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-20220512011616-7184\id_rsa Username:docker}
	I0512 01:17:36.527022    2560 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.3619239s)
	I0512 01:17:36.539024    2560 ssh_runner.go:195] Run: cat /etc/os-release
	I0512 01:17:36.556028    2560 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0512 01:17:36.556028    2560 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0512 01:17:36.556028    2560 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0512 01:17:36.556028    2560 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0512 01:17:36.556028    2560 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0512 01:17:36.556028    2560 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0512 01:17:36.558054    2560 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem -> 71842.pem in /etc/ssl/certs
	I0512 01:17:36.571139    2560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0512 01:17:36.594018    2560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /etc/ssl/certs/71842.pem (1708 bytes)
	I0512 01:17:36.654090    2560 start.go:309] post-start completed in 1.5044639s
	I0512 01:17:36.665029    2560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220512011616-7184
	I0512 01:17:33.657645    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:35.662792    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:35.424945    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:37.435024    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:39.933361    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:37.756522    2560 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220512011616-7184: (1.0914367s)
	I0512 01:17:37.756522    2560 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\config.json ...
	I0512 01:17:37.779140    2560 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 01:17:37.791145    2560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184
	I0512 01:17:38.874454    2560 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184: (1.0832535s)
	I0512 01:17:38.874454    2560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50805 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-20220512011616-7184\id_rsa Username:docker}
	I0512 01:17:39.022729    2560 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.2435248s)
	I0512 01:17:39.035347    2560 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0512 01:17:39.050896    2560 start.go:134] duration metric: createHost completed in 1m13.3344515s
	I0512 01:17:39.050896    2560 start.go:81] releasing machines lock for "newest-cni-20220512011616-7184", held for 1m13.3349796s
	I0512 01:17:39.057882    2560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220512011616-7184
	I0512 01:17:40.147992    2560 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220512011616-7184: (1.0899252s)
	I0512 01:17:40.153495    2560 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0512 01:17:40.163576    2560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184
	I0512 01:17:40.166606    2560 ssh_runner.go:195] Run: systemctl --version
	I0512 01:17:40.177016    2560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184
	I0512 01:17:41.292156    2560 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184: (1.128485s)
	I0512 01:17:41.292156    2560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50805 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-20220512011616-7184\id_rsa Username:docker}
	I0512 01:17:41.308165    2560 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184: (1.1310907s)
	I0512 01:17:41.308165    2560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50805 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-20220512011616-7184\id_rsa Username:docker}
	I0512 01:17:41.474589    2560 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.3210262s)
	I0512 01:17:41.474589    2560 ssh_runner.go:235] Completed: systemctl --version: (1.3079162s)
	I0512 01:17:41.489555    2560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0512 01:17:41.541787    2560 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 01:17:41.587722    2560 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0512 01:17:41.608074    2560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0512 01:17:41.637225    2560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0512 01:17:41.699216    2560 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0512 01:17:41.936006    2560 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0512 01:17:42.159856    2560 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 01:17:42.212627    2560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0512 01:17:42.386680    2560 ssh_runner.go:195] Run: sudo systemctl start docker
	I0512 01:17:42.437061    2560 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 01:17:42.542968    2560 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 01:17:38.157682    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:40.159332    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:42.631887    2560 out.go:204] * Preparing Kubernetes v1.23.6-rc.0 on Docker 20.10.15 ...
	I0512 01:17:42.641193    2560 cli_runner.go:164] Run: docker exec -t newest-cni-20220512011616-7184 dig +short host.docker.internal
	I0512 01:17:44.040786    2560 cli_runner.go:217] Completed: docker exec -t newest-cni-20220512011616-7184 dig +short host.docker.internal: (1.3995208s)
	I0512 01:17:44.040786    2560 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
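
The dig +short host.docker.internal step above discovers the host's IP from inside the container via Docker Desktop's built-in DNS alias. A Go equivalent using the resolver; a sketch that only succeeds where that alias is resolvable:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Resolve Docker Desktop's host alias to find the host IP.
	addrs, err := net.LookupHost("host.docker.internal")
	if err != nil {
		fmt.Println("lookup failed:", err) // resolvable only under Docker Desktop
		return
	}
	fmt.Println(addrs) // e.g. [192.168.65.2], as in the log above
}
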
	I0512 01:17:44.049783    2560 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0512 01:17:44.065551    2560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0512 01:17:44.110156    2560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184
	I0512 01:17:45.235777    2560 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184: (1.1255009s)
	I0512 01:17:45.238150    2560 out.go:177]   - kubelet.network-plugin=cni
	I0512 01:17:45.240675    2560 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0512 01:17:41.964742    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:44.421599    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:45.242694    2560 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime docker
	I0512 01:17:45.249604    2560 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 01:17:45.322239    2560 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6-rc.0
	k8s.gcr.io/kube-controller-manager:v1.23.6-rc.0
	k8s.gcr.io/kube-proxy:v1.23.6-rc.0
	k8s.gcr.io/kube-scheduler:v1.23.6-rc.0
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 01:17:45.322239    2560 docker.go:541] Images already preloaded, skipping extraction
	I0512 01:17:45.330804    2560 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 01:17:45.403625    2560 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6-rc.0
	k8s.gcr.io/kube-controller-manager:v1.23.6-rc.0
	k8s.gcr.io/kube-scheduler:v1.23.6-rc.0
	k8s.gcr.io/kube-proxy:v1.23.6-rc.0
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 01:17:45.403625    2560 cache_images.go:84] Images are preloaded, skipping loading
	I0512 01:17:45.415253    2560 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0512 01:17:45.603001    2560 cni.go:95] Creating CNI manager for ""
	I0512 01:17:45.603001    2560 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 01:17:45.603001    2560 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0512 01:17:45.603001    2560 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220512011616-7184 NodeName:newest-cni-20220512011616-7184 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0512 01:17:45.603597    2560 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "newest-cni-20220512011616-7184"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0512 01:17:45.603768    2560 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220512011616-7184 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6-rc.0 ClusterName:newest-cni-20220512011616-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0512 01:17:45.617904    2560 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6-rc.0
	I0512 01:17:45.659779    2560 binaries.go:44] Found k8s binaries, skipping transfer
	I0512 01:17:45.674372    2560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0512 01:17:45.696620    2560 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (419 bytes)
	I0512 01:17:45.746491    2560 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0512 01:17:45.787646    2560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2193 bytes)
	I0512 01:17:45.845821    2560 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0512 01:17:45.862036    2560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0512 01:17:45.893944    2560 certs.go:54] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184 for IP: 192.168.49.2
	I0512 01:17:45.894636    2560 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I0512 01:17:45.894636    2560 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I0512 01:17:45.895546    2560 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\client.key
	I0512 01:17:45.895838    2560 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\client.crt with IP's: []
	I0512 01:17:46.222222    2560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\client.crt ...
	I0512 01:17:46.222222    2560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\client.crt: {Name:mk13be4e904b1f2d1082cd58518f5cc2205883ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:17:46.222817    2560 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\client.key ...
	I0512 01:17:46.222817    2560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\client.key: {Name:mk5acb271de66581ae37d416909cab65f282d634 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:17:46.223920    2560 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\apiserver.key.dd3b5fb2
	I0512 01:17:46.224999    2560 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0512 01:17:46.650294    2560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\apiserver.crt.dd3b5fb2 ...
	I0512 01:17:46.650347    2560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\apiserver.crt.dd3b5fb2: {Name:mkb5c32d37ad79c637a1c42c98331837403bc648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:17:46.652531    2560 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\apiserver.key.dd3b5fb2 ...
	I0512 01:17:46.652531    2560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\apiserver.key.dd3b5fb2: {Name:mk3a5046ca3307a7252c546463ce5c3fc6fd8fb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:17:46.653595    2560 certs.go:320] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\apiserver.crt.dd3b5fb2 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\apiserver.crt
	I0512 01:17:46.660566    2560 certs.go:324] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\apiserver.key.dd3b5fb2 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\apiserver.key
	I0512 01:17:46.661795    2560 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\proxy-client.key
	I0512 01:17:46.662738    2560 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\proxy-client.crt with IP's: []
	I0512 01:17:42.662315    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:45.167621    4756 pod_ready.go:102] pod "metrics-server-6f89b5864b-jgnxp" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:46.895459    2560 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\proxy-client.crt ...
	I0512 01:17:46.895459    2560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\proxy-client.crt: {Name:mk38674d708aaad6edbd492739619a23f80f834f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:17:46.896801    2560 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\proxy-client.key ...
	I0512 01:17:46.896801    2560 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\proxy-client.key: {Name:mkebc53a8d9906c592fd3d95ee04eca578b8e723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:17:46.904166    2560 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184.pem (1338 bytes)
	W0512 01:17:46.904622    2560 certs.go:384] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184_empty.pem, impossibly tiny 0 bytes
	I0512 01:17:46.904622    2560 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0512 01:17:46.904880    2560 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0512 01:17:46.905195    2560 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0512 01:17:46.905411    2560 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0512 01:17:46.905590    2560 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem (1708 bytes)
	I0512 01:17:46.909479    2560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0512 01:17:46.981766    2560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0512 01:17:47.045713    2560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0512 01:17:47.111500    2560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-20220512011616-7184\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0512 01:17:47.168189    2560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0512 01:17:47.235257    2560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0512 01:17:47.293245    2560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0512 01:17:47.348441    2560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0512 01:17:47.408549    2560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184.pem --> /usr/share/ca-certificates/7184.pem (1338 bytes)
	I0512 01:17:47.467525    2560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /usr/share/ca-certificates/71842.pem (1708 bytes)
	I0512 01:17:47.527576    2560 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0512 01:17:47.585062    2560 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0512 01:17:47.640492    2560 ssh_runner.go:195] Run: openssl version
	I0512 01:17:47.676519    2560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7184.pem && ln -fs /usr/share/ca-certificates/7184.pem /etc/ssl/certs/7184.pem"
	I0512 01:17:47.723983    2560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7184.pem
	I0512 01:17:47.751470    2560 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 11 23:10 /usr/share/ca-certificates/7184.pem
	I0512 01:17:47.762478    2560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7184.pem
	I0512 01:17:47.784466    2560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7184.pem /etc/ssl/certs/51391683.0"
	I0512 01:17:47.825391    2560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71842.pem && ln -fs /usr/share/ca-certificates/71842.pem /etc/ssl/certs/71842.pem"
	I0512 01:17:47.867109    2560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71842.pem
	I0512 01:17:47.877108    2560 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 11 23:10 /usr/share/ca-certificates/71842.pem
	I0512 01:17:47.887117    2560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71842.pem
	I0512 01:17:47.912943    2560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71842.pem /etc/ssl/certs/3ec20f2e.0"
	I0512 01:17:47.964740    2560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0512 01:17:48.003931    2560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:17:48.020432    2560 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 11 23:00 /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:17:48.032921    2560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:17:48.062957    2560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
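	The openssl/ln pairs above follow the standard OpenSSL c_rehash convention: each CA in /usr/share/ca-certificates is linked under /etc/ssl/certs by its subject hash (e.g. b5213941.0 for minikubeCA.pem). A sketch of the equivalent manual step for one certificate:
	
		# link a CA under its OpenSSL subject hash, as the runner does above
		H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"
	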
	I0512 01:17:48.090997    2560 kubeadm.go:391] StartCluster: {Name:newest-cni-20220512011616-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:newest-cni-20220512011616-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 01:17:48.100216    2560 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0512 01:17:48.186883    2560 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0512 01:17:48.226327    2560 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0512 01:17:48.256406    2560 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0512 01:17:48.268236    2560 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0512 01:17:48.297172    2560 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0512 01:17:48.297172    2560 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0512 01:17:46.427044    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:48.429259    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:17:49.455599    2560 out.go:204]   - Generating certificates and keys ...
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-05-12 01:09:11 UTC, end at Thu 2022-05-12 01:17:57 UTC. --
	May 12 01:14:40 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:14:40.048239000Z" level=info msg="ignoring event" container=2b690b2fc9feb62ed8e8f9aa85d220ab2d1c28238e9c04955d0ab596503d37cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:14:50 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:14:50.341408600Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=ce7f1fe82bf2bc1a64ca95cc4e67c76cdb682894624330aced5ec47b5af10f71
	May 12 01:14:50 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:14:50.468774400Z" level=info msg="ignoring event" container=ce7f1fe82bf2bc1a64ca95cc4e67c76cdb682894624330aced5ec47b5af10f71 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:14:50 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:14:50.993120200Z" level=info msg="ignoring event" container=540f22bfbe5c4a11285178b260341d16a8a21aeb1942c45178b44c2dd8211d85 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:14:51 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:14:51.340757800Z" level=info msg="ignoring event" container=a71723de1519b021bb4e70ea2a0ba5f89f0d18f9b54d874291ac057ae4e8617b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:14:51 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:14:51.722771800Z" level=info msg="ignoring event" container=5e45d5b75bc1e008b6730c96dd4aa282e0de28ee110173867fedd45b5da756de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:14:52 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:14:52.329839000Z" level=info msg="ignoring event" container=2fe4977f8083adbe9b9c195ddaf124421a267b1f00c46d7db2e5fae7133214d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:14:52 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:14:52.722427900Z" level=info msg="ignoring event" container=315d2f077acbb782fe1f22566e36db5e17b7ac597f0ecda2539da4118b544a37 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:14:53 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:14:53.107205300Z" level=info msg="ignoring event" container=d06f2ce3c42ff40d1feefbdcd44f86f9a7964086e0ab2c9b45af5666b2456b9e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:15:43 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:15:43.602422400Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:15:43 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:15:43.602558700Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:15:43 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:15:43.675088400Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:15:46 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:15:46.314464700Z" level=info msg="ignoring event" container=9b23d1a7ae8d50fc36c3a84e7fa27f6bd394c7ca3075393e8d59734f91c89562 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:15:53 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:15:53.059211800Z" level=info msg="ignoring event" container=16b5022cbfde459503ec1c6c97880167c863f99f6417673f21bee9897eb6e903 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:15:53 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:15:53.289349500Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	May 12 01:15:53 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:15:53.451241900Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	May 12 01:16:09 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:16:09.652993800Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	May 12 01:16:10 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:16:10.155996900Z" level=info msg="ignoring event" container=9f361ee2f262164c7dfece5dc895659583381b0c07a6b9c3e249e6c3ea5dc449 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:16:11 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:16:11.271856200Z" level=info msg="ignoring event" container=7db9afb6458522b7e6806386164b90cb132665dff4a75636da1317e786507031 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:16:28 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:16:28.966634100Z" level=info msg="ignoring event" container=87a66ed88e9c9053655f53c15fe6c3b508f305f2c9f635fe9860b89bdc33f901 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:16:37 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:16:37.190671900Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:16:37 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:16:37.190888400Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:16:37 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:16:37.222221100Z" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:17:03 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:17:03.081663200Z" level=error msg="Handler for POST /v1.41/images/create returned error: error creating temporary lease: context canceled"
	May 12 01:17:03 embed-certs-20220512010611-7184 dockerd[249]: time="2022-05-12T01:17:03.749962300Z" level=error msg="Handler for POST /v1.41/containers/fb24ff0a554f/pause returned error: Cannot pause container fb24ff0a554f61b00a9687ba07ba5ecf0249fe182de14e082a57f1c3023219b7: OCI runtime pause failed: unable to freeze: unknown"
	
	* 
	* ==> container status <==
	* time="2022-05-12T01:17:59Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE                    COMMAND                  CREATED              STATUS                          PORTS     NAMES
	02bf63a7e857   a90209bb39e3             "nginx -g 'daemon of…"   About a minute ago   Created                                   k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-56974995fc-jm4p7_kubernetes-dashboard_03203ac1-9306-4299-a099-7915539d52af_3
	9dc69e11c6a1   kubernetesui/dashboard   "/dashboard --insecu…"   About a minute ago   Up About a minute (Paused)                k8s_kubernetes-dashboard_kubernetes-dashboard-8469778f77-bwdns_kubernetes-dashboard_4c48a657-b6a3-40e8-86b8-75310a5e2c36_0
	87a66ed88e9c   a90209bb39e3             "nginx -g 'daemon of…"   About a minute ago   Exited (1) About a minute ago             k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-56974995fc-jm4p7_kubernetes-dashboard_03203ac1-9306-4299-a099-7915539d52af_2
	2ab01f88686a   k8s.gcr.io/pause:3.6     "/pause"                 2 minutes ago        Up 2 minutes (Paused)                     k8s_POD_kubernetes-dashboard-8469778f77-bwdns_kubernetes-dashboard_4c48a657-b6a3-40e8-86b8-75310a5e2c36_0
	ca490861f8c5   k8s.gcr.io/pause:3.6     "/pause"                 2 minutes ago        Up 2 minutes (Paused)                     k8s_POD_dashboard-metrics-scraper-56974995fc-jm4p7_kubernetes-dashboard_03203ac1-9306-4299-a099-7915539d52af_0
	518dc8f1de40   6e38f40d628d             "/storage-provisioner"   2 minutes ago        Up 2 minutes (Paused)                     k8s_storage-provisioner_storage-provisioner_kube-system_16ae67b0-7538-42e0-b064-d19f5254d784_0
	f81a11687ddc   k8s.gcr.io/pause:3.6     "/pause"                 2 minutes ago        Up 2 minutes (Paused)                     k8s_POD_metrics-server-b955d9d8-dj72t_kube-system_a99dbe00-ec68-4bb2-babf-aaedfbb534ad_0
	771266c61ad4   k8s.gcr.io/pause:3.6     "/pause"                 2 minutes ago        Up 2 minutes (Paused)                     k8s_POD_storage-provisioner_kube-system_16ae67b0-7538-42e0-b064-d19f5254d784_0
	c33958dd750e   a4ca41631cc7             "/coredns -conf /etc…"   2 minutes ago        Up 2 minutes (Paused)                     k8s_coredns_coredns-64897985d-pfxm2_kube-system_ab04c290-ac38-41e8-8782-4cc5375dc8fd_0
	f5ebcb93e156   3c53fa8541f9             "/usr/local/bin/kube…"   2 minutes ago        Up 2 minutes (Paused)                     k8s_kube-proxy_kube-proxy-2cmfg_kube-system_5563a9b4-18bb-4f5c-a0a9-08608f7459ef_0
	c0de5d57b524   k8s.gcr.io/pause:3.6     "/pause"                 2 minutes ago        Up 2 minutes (Paused)                     k8s_POD_kube-proxy-2cmfg_kube-system_5563a9b4-18bb-4f5c-a0a9-08608f7459ef_0
	67ed526660f7   k8s.gcr.io/pause:3.6     "/pause"                 2 minutes ago        Up 2 minutes (Paused)                     k8s_POD_coredns-64897985d-pfxm2_kube-system_ab04c290-ac38-41e8-8782-4cc5375dc8fd_0
	a69052506663   b0c9e5e4dbb1             "kube-controller-man…"   2 minutes ago        Up 2 minutes (Paused)                     k8s_kube-controller-manager_kube-controller-manager-embed-certs-20220512010611-7184_kube-system_d917ace05297db24f56452b86e4773fb_2
	9f604816537e   884d49d6d8c9             "kube-scheduler --au…"   2 minutes ago        Up 2 minutes (Paused)                     k8s_kube-scheduler_kube-scheduler-embed-certs-20220512010611-7184_kube-system_0e1bb9864df8eba4c0a22d55822b2567_2
	b523f2c5d6b7   3fc1d62d6587             "kube-apiserver --ad…"   2 minutes ago        Up 2 minutes (Paused)                     k8s_kube-apiserver_kube-apiserver-embed-certs-20220512010611-7184_kube-system_b9a3c4e8134074af263a952ade5d5526_2
	fb24ff0a554f   25f8c7f3da61             "etcd --advertise-cl…"   2 minutes ago        Up 2 minutes                              k8s_etcd_etcd-embed-certs-20220512010611-7184_kube-system_3679893b1c4f0b06f0ecc0d962314512_2
	232a840dde2a   k8s.gcr.io/pause:3.6     "/pause"                 2 minutes ago        Up 2 minutes (Paused)                     k8s_POD_kube-scheduler-embed-certs-20220512010611-7184_kube-system_0e1bb9864df8eba4c0a22d55822b2567_0
	89f8f5f5be49   k8s.gcr.io/pause:3.6     "/pause"                 2 minutes ago        Up 2 minutes (Paused)                     k8s_POD_kube-controller-manager-embed-certs-20220512010611-7184_kube-system_d917ace05297db24f56452b86e4773fb_0
	1ac20faf9fc4   k8s.gcr.io/pause:3.6     "/pause"                 2 minutes ago        Up 2 minutes (Paused)                     k8s_POD_kube-apiserver-embed-certs-20220512010611-7184_kube-system_b9a3c4e8134074af263a952ade5d5526_0
	f8b333667e2b   k8s.gcr.io/pause:3.6     "/pause"                 2 minutes ago        Up 2 minutes (Paused)                     k8s_POD_etcd-embed-certs-20220512010611-7184_kube-system_3679893b1c4f0b06f0ecc0d962314512_0
	
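	Most containers above report Up ... (Paused), which matches the dockershim endpoint timing out in the header: a paused runtime cannot answer CRI calls. A sketch for inspecting or reverting that state manually inside the node (not part of the test flow), reusing the same status filter the minikube log ran earlier:
	
		docker ps --filter status=paused --format '{{.ID}} {{.Names}}'
		docker unpause $(docker ps -q --filter status=paused)
	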
	* 
	* ==> coredns [c33958dd750e] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [May12 00:51] WSL2: Performing memory compaction.
	[May12 00:52] WSL2: Performing memory compaction.
	[May12 00:54] WSL2: Performing memory compaction.
	[May12 00:56] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.010948] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.036593] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000001] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000001] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[May12 00:57] WSL2: Performing memory compaction.
	[May12 00:58] WSL2: Performing memory compaction.
	[May12 01:00] WSL2: Performing memory compaction.
	[May12 01:01] WSL2: Performing memory compaction.
	[May12 01:02] WSL2: Performing memory compaction.
	[May12 01:03] WSL2: Performing memory compaction.
	[May12 01:05] WSL2: Performing memory compaction.
	[May12 01:06] WSL2: Performing memory compaction.
	[May12 01:07] WSL2: Performing memory compaction.
	[May12 01:08] WSL2: Performing memory compaction.
	[May12 01:09] WSL2: Performing memory compaction.
	[May12 01:12] WSL2: Performing memory compaction.
	[May12 01:14] WSL2: Performing memory compaction.
	[May12 01:16] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [fb24ff0a554f] <==
	* {"level":"warn","ts":"2022-05-12T01:17:05.708Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3238511125514081863,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-05-12T01:17:06.208Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3238511125514081863,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-05-12T01:17:06.487Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:17:00.906Z","time spent":"5.5810697s","remote":"127.0.0.1:45074","response type":"/etcdserverpb.Maintenance/Status","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2022-05-12T01:17:06.709Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3238511125514081863,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-05-12T01:17:06.720Z","caller":"wal/wal.go:802","msg":"slow fdatasync","took":"6.6140238s","expected-duration":"1s"}
	{"level":"info","ts":"2022-05-12T01:17:06.721Z","caller":"traceutil/trace.go:171","msg":"trace[941811362] linearizableReadLoop","detail":"{readStateIndex:731; appliedIndex:730; }","duration":"6.0290245s","start":"2022-05-12T01:17:00.692Z","end":"2022-05-12T01:17:06.721Z","steps":["trace[941811362] 'read index received'  (duration: 6.0288339s)","trace[941811362] 'applied index is now lower than readState.Index'  (duration: 186.9µs)"],"step_count":2}
	{"level":"warn","ts":"2022-05-12T01:17:07.222Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3238511125514081866,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-05-12T01:17:07.723Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3238511125514081866,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2022-05-12T01:17:07.832Z","caller":"wal/wal.go:802","msg":"slow fdatasync","took":"1.0878529s","expected-duration":"1s"}
	{"level":"info","ts":"2022-05-12T01:17:07.833Z","caller":"traceutil/trace.go:171","msg":"trace[1368074018] linearizableReadLoop","detail":"{readStateIndex:731; appliedIndex:731; }","duration":"1.1114614s","start":"2022-05-12T01:17:06.721Z","end":"2022-05-12T01:17:07.833Z","steps":["trace[1368074018] 'read index received'  (duration: 1.1114502s)","trace[1368074018] 'applied index is now lower than readState.Index'  (duration: 8.1µs)"],"step_count":2}
	{"level":"warn","ts":"2022-05-12T01:17:08.587Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"7.4513641s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/\" range_end:\"/registry/daemonsets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"warn","ts":"2022-05-12T01:17:08.587Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"5.4105104s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2022-05-12T01:17:08.587Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"6.8961984s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-05-12T01:17:08.587Z","caller":"traceutil/trace.go:171","msg":"trace[357524218] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:700; }","duration":"5.4105601s","start":"2022-05-12T01:17:03.176Z","end":"2022-05-12T01:17:08.587Z","steps":["trace[357524218] 'agreement among raft nodes before linearized reading'  (duration: 4.6562658s)","trace[357524218] 'count revisions from in-memory index tree'  (duration: 754.2073ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-12T01:17:08.587Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"6.612548s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-05-12T01:17:08.587Z","caller":"traceutil/trace.go:171","msg":"trace[565821533] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:700; }","duration":"6.8963807s","start":"2022-05-12T01:17:01.691Z","end":"2022-05-12T01:17:08.587Z","steps":["trace[565821533] 'agreement among raft nodes before linearized reading'  (duration: 6.1418729s)","trace[565821533] 'count revisions from in-memory index tree'  (duration: 754.3134ms)"],"step_count":2}
	{"level":"info","ts":"2022-05-12T01:17:08.587Z","caller":"traceutil/trace.go:171","msg":"trace[673773244] range","detail":"{range_begin:/registry/cronjobs/; range_end:/registry/cronjobs0; response_count:0; response_revision:700; }","duration":"6.6129827s","start":"2022-05-12T01:17:01.974Z","end":"2022-05-12T01:17:08.587Z","steps":["trace[673773244] 'agreement among raft nodes before linearized reading'  (duration: 5.8586757s)","trace[673773244] 'count revisions from in-memory index tree'  (duration: 753.8425ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-12T01:17:08.587Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:17:03.176Z","time spent":"5.4109103s","remote":"127.0.0.1:42566","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":28,"request content":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true "}
	{"level":"warn","ts":"2022-05-12T01:17:08.587Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:17:01.691Z","time spent":"6.8964365s","remote":"127.0.0.1:42530","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":28,"request content":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true "}
	{"level":"warn","ts":"2022-05-12T01:17:08.587Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:17:01.974Z","time spent":"6.6130236s","remote":"127.0.0.1:42544","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":0,"response size":28,"request content":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true "}
	{"level":"warn","ts":"2022-05-12T01:17:08.587Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"6.4690489s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1133"}
	{"level":"info","ts":"2022-05-12T01:17:08.588Z","caller":"traceutil/trace.go:171","msg":"trace[1431651457] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:700; }","duration":"6.469675s","start":"2022-05-12T01:17:02.118Z","end":"2022-05-12T01:17:08.588Z","steps":["trace[1431651457] 'agreement among raft nodes before linearized reading'  (duration: 5.7147497s)","trace[1431651457] 'range keys from in-memory index tree'  (duration: 754.1747ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-12T01:17:08.588Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:17:02.118Z","time spent":"6.4697368s","remote":"127.0.0.1:42502","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1156,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2022-05-12T01:17:08.587Z","caller":"traceutil/trace.go:171","msg":"trace[1806771618] range","detail":"{range_begin:/registry/daemonsets/; range_end:/registry/daemonsets0; response_count:0; response_revision:700; }","duration":"7.4515507s","start":"2022-05-12T01:17:01.135Z","end":"2022-05-12T01:17:08.587Z","steps":["trace[1806771618] 'agreement among raft nodes before linearized reading'  (duration: 6.6972502s)","trace[1806771618] 'count revisions from in-memory index tree'  (duration: 753.9944ms)"],"step_count":2}
	{"level":"warn","ts":"2022-05-12T01:17:08.588Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-05-12T01:17:01.135Z","time spent":"7.4522811s","remote":"127.0.0.1:42618","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":30,"request content":"key:\"/registry/daemonsets/\" range_end:\"/registry/daemonsets0\" count_only:true "}
	
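	The repeated "slow fdatasync" warnings (6.6s and 1.09s against the expected 1s) point at WAL disk latency on the WSL2-backed volume rather than at etcd itself, and they line up with the 5-7s apply/range requests above. A sketch for checking the member directly, assuming the kubeadm etcd cert layout under the certificatesDir configured earlier:
	
		ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
		  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
		  --cert=/var/lib/minikube/certs/etcd/server.crt \
		  --key=/var/lib/minikube/certs/etcd/server.key \
		  endpoint status --write-out=table
	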
	* 
	* ==> kernel <==
	*  01:18:50 up  2:26,  0 users,  load average: 8.20, 6.29, 4.82
	Linux embed-certs-20220512010611-7184 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [b523f2c5d6b7] <==
	* Trace[1175639905]: ---"About to write a response" 792ms (01:16:27.381)
	Trace[1175639905]: [792.5516ms] [792.5516ms] END
	W0512 01:16:41.573395       1 handler_proxy.go:104] no RequestInfo found in the context
	E0512 01:16:41.573548       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0512 01:16:41.573563       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	{"level":"warn","ts":"2022-05-12T01:17:02.692Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000b16a80/#initially=[https://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
	{"level":"warn","ts":"2022-05-12T01:17:03.041Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f30a80/#initially=[https://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E0512 01:17:03.041842       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0512 01:17:03.041902       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0512 01:17:03.042243       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	{"level":"warn","ts":"2022-05-12T01:17:03.042Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001d1cfc0/#initially=[https://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	I0512 01:17:03.042321       1 trace.go:205] Trace[1136970699]: "GuaranteedUpdate etcd3" type:*core.Event (12-May-2022 01:17:01.993) (total time: 1048ms):
	Trace[1136970699]: [1.0485025s] [1.0485025s] END
	E0512 01:17:03.042360       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 100.3µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0512 01:17:03.043808       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0512 01:17:03.045166       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0512 01:17:03.046500       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	I0512 01:17:03.049168       1 trace.go:205] Trace[682879259]: "Get" url:/api/v1/namespaces/kube-system/pods/metrics-server-b955d9d8-dj72t,user-agent:Go-http-client/2.0,audit-id:ea2ec619-e586-4108-aec3-d9f5f8f52844,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (12-May-2022 01:17:01.992) (total time: 1056ms):
	Trace[682879259]: [1.0563334s] [1.0563334s] END
	E0512 01:17:03.050459       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0512 01:17:03.051076       1 timeout.go:137] post-timeout activity - time-elapsed: 9.5828ms, GET "/api/v1/namespaces/kube-system/pods/metrics-server-b955d9d8-dj72t" result: <nil>
	I0512 01:17:03.052138       1 trace.go:205] Trace[1558827293]: "Patch" url:/api/v1/namespaces/kube-system/events/metrics-server-b955d9d8-dj72t.16ee3698e0dba3bc,user-agent:kubelet/v1.23.5 (linux/amd64) kubernetes/c285e78,audit-id:85f0e591-b471-4b40-bcf2-498fb391bb54,client:192.168.58.2,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (12-May-2022 01:17:01.993) (total time: 1058ms):
	Trace[1558827293]: [1.0585354s] [1.0585354s] END
	E0512 01:17:03.053083       1 timeout.go:137] post-timeout activity - time-elapsed: 10.9133ms, PATCH "/api/v1/namespaces/kube-system/events/metrics-server-b955d9d8-dj72t.16ee3698e0dba3bc" result: <nil>
	
	* 
	* ==> kube-controller-manager [a69052506663] <==
	* I0512 01:15:30.956479       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-dhfqv"
	I0512 01:15:31.299178       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0512 01:15:31.454242       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-dhfqv"
	I0512 01:15:39.184465       1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-b955d9d8 to 1"
	I0512 01:15:39.365986       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-b955d9d8-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0512 01:15:39.457361       1 replica_set.go:536] sync "kube-system/metrics-server-b955d9d8" failed with pods "metrics-server-b955d9d8-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0512 01:15:39.560403       1 event.go:294] "Event occurred" object="kube-system/metrics-server-b955d9d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-b955d9d8-dj72t"
	I0512 01:15:43.510991       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-56974995fc to 1"
	I0512 01:15:43.578790       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8469778f77 to 1"
	I0512 01:15:43.582032       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0512 01:15:43.587651       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0512 01:15:43.653430       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0512 01:15:43.654946       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0512 01:15:43.750551       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0512 01:15:43.750596       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0512 01:15:43.864113       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0512 01:15:43.880490       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0512 01:15:43.881012       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-jm4p7"
	I0512 01:15:44.008614       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-bwdns"
	E0512 01:16:00.071080       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 01:16:00.568964       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 01:16:30.254315       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 01:16:30.755177       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 01:17:00.286308       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 01:17:00.794867       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
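	The recurring metrics.k8s.io/v1beta1 discovery failures match the metrics-server pod that never turns Ready elsewhere in this log; both the resource-quota controller and the garbage collector stall on the unavailable aggregated API. A sketch for confirming the broken aggregation from kubectl, assuming the conventional k8s-app=metrics-server label:
	
		kubectl get apiservice v1beta1.metrics.k8s.io
		kubectl -n kube-system get pods -l k8s-app=metrics-server
	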
	* 
	* ==> kube-proxy [f5ebcb93e156] <==
	* E0512 01:15:34.658734       1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin"
	I0512 01:15:34.664705       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
	I0512 01:15:34.675967       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
	I0512 01:15:34.753502       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
	I0512 01:15:34.757225       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
	I0512 01:15:34.762049       1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
	I0512 01:15:34.857624       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0512 01:15:34.857672       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0512 01:15:34.857872       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0512 01:15:35.178606       1 server_others.go:206] "Using iptables Proxier"
	I0512 01:15:35.178749       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0512 01:15:35.178770       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0512 01:15:35.178848       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0512 01:15:35.180125       1 server.go:656] "Version info" version="v1.23.5"
	I0512 01:15:35.182125       1 config.go:317] "Starting service config controller"
	I0512 01:15:35.182160       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0512 01:15:35.182196       1 config.go:226] "Starting endpoint slice config controller"
	I0512 01:15:35.182203       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0512 01:15:35.349933       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0512 01:15:35.349960       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [9f604816537e] <==
	* W0512 01:15:12.668516       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0512 01:15:12.668626       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0512 01:15:12.865214       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0512 01:15:12.865337       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0512 01:15:12.951479       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0512 01:15:12.951611       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0512 01:15:12.951711       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0512 01:15:12.951727       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0512 01:15:12.951724       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0512 01:15:12.951762       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0512 01:15:12.956630       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0512 01:15:12.956735       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0512 01:15:13.081369       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0512 01:15:13.081492       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0512 01:15:13.152421       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0512 01:15:13.152543       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0512 01:15:13.152642       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0512 01:15:13.152665       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0512 01:15:13.161242       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0512 01:15:13.161288       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0512 01:15:13.252324       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0512 01:15:13.252481       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0512 01:15:13.253501       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0512 01:15:13.253614       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0512 01:15:15.060824       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-05-12 01:09:11 UTC, end at Thu 2022-05-12 01:18:50 UTC. --
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: time="2022-05-12T01:17:01Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/podcfb29c14-064a-460f-8a47-3f5667911e1a/0a485f32b86d23942a3247815b04bb8eb0388c63d62e1e34aa821bb34bc22b79: device or resource busy"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: I0512 01:17:01.201681    5195 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod002009a6866b0a2506f8d5c8c4da7548] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod002009a6866b0a2506f8d5c8c4da7548] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod002009a6866b0a2506f8d5c8c4da7548]"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: I0512 01:17:01.201713    5195 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable podcfb29c14-064a-460f-8a47-3f5667911e1a] err="unable to destroy cgroup paths for cgroup [kubepods burstable podcfb29c14-064a-460f-8a47-3f5667911e1a] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/podcfb29c14-064a-460f-8a47-3f5667911e1a]"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: time="2022-05-12T01:17:01Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod9fc427d2e6746d2b3f18846f6f0fcafb/2563f60fad220f07db8f45cc96d5a42c26fb34d6d3e5df05ee88aa982896f7b0: device or resource busy"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: time="2022-05-12T01:17:01Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/besteffort/pod88181eaf-3164-49ec-a268-6e0f32698745/52962f3d9e9a919c14831d3b3674cd3bd03654b4cac2792505be02ed4c3c48fa: device or resource busy"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: time="2022-05-12T01:17:01Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/pod7bdf10a6ac21254bb5823aca69fc5310/12bad5d82201d738c336fe1786c87d6e5afeae7df56f63d77efc9dbcd020d8e7: device or resource busy"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: I0512 01:17:01.201861    5195 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod9fc427d2e6746d2b3f18846f6f0fcafb] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod9fc427d2e6746d2b3f18846f6f0fcafb] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod9fc427d2e6746d2b3f18846f6f0fcafb]"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: I0512 01:17:01.202022    5195 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods besteffort pod88181eaf-3164-49ec-a268-6e0f32698745] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod88181eaf-3164-49ec-a268-6e0f32698745] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/besteffort/pod88181eaf-3164-49ec-a268-6e0f32698745]"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: I0512 01:17:01.202124    5195 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable pod7bdf10a6ac21254bb5823aca69fc5310] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod7bdf10a6ac21254bb5823aca69fc5310] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/pod7bdf10a6ac21254bb5823aca69fc5310]"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: time="2022-05-12T01:17:01Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/besteffort/pod59444036ebaaae96eddd41dadabbc71a/26850d91e05e50e404cfbae0eb9a3758099cd1a8ad614d8e6c7b3f9e1d0d9b18: device or resource busy"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: I0512 01:17:01.212190    5195 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods besteffort pod59444036ebaaae96eddd41dadabbc71a] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod59444036ebaaae96eddd41dadabbc71a] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/besteffort/pod59444036ebaaae96eddd41dadabbc71a]"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: time="2022-05-12T01:17:01Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/besteffort/podf54a67f2-f423-4525-a613-569e73288c94/4170b36e0b4c5f27b0b2178c8592e49502600a4ad2a613c36dcdb3953d5ec28d: device or resource busy"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: time="2022-05-12T01:17:01Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/besteffort/pod8728e39b-198d-41a2-ba6f-5934ef025209/52d5f6fdaf1406ef4c262aba07c84c6a554ab7c117b38097d889ae1a972c7a58: device or resource busy"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: I0512 01:17:01.212251    5195 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods besteffort podf54a67f2-f423-4525-a613-569e73288c94] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podf54a67f2-f423-4525-a613-569e73288c94] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/besteffort/podf54a67f2-f423-4525-a613-569e73288c94]"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: I0512 01:17:01.212283    5195 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods besteffort pod8728e39b-198d-41a2-ba6f-5934ef025209] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod8728e39b-198d-41a2-ba6f-5934ef025209] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/besteffort/pod8728e39b-198d-41a2-ba6f-5934ef025209]"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: time="2022-05-12T01:17:01Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/besteffort/pod97886edb-5d10-49e9-8875-3c306b063e34/3258411661dbdb88691d22599bb15da544c488943246e63a9fc7ccaf03ba5585: device or resource busy"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: I0512 01:17:01.212391    5195 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods besteffort pod97886edb-5d10-49e9-8875-3c306b063e34] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod97886edb-5d10-49e9-8875-3c306b063e34] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/besteffort/pod97886edb-5d10-49e9-8875-3c306b063e34]"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: time="2022-05-12T01:17:01Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/burstable/podba40e2cfcc5b57908dd25747d02cea61/11a6c9cca5627b8c45dc17777925da0d62dc2ad308b44f90bf8af74477b3f232: device or resource busy"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: time="2022-05-12T01:17:01Z" level=error msg="Failed to remove cgroup" error="rmdir /sys/fs/cgroup/rdma/kubepods/besteffort/pod070fb71a-1145-4881-a9cd-076ab7a6d77b/662135225f6c66890eeb0b9b3bdfa106d9b4f7a32ed7054006ac470f1dbfbfe9: device or resource busy"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: I0512 01:17:01.212443    5195 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods burstable podba40e2cfcc5b57908dd25747d02cea61] err="unable to destroy cgroup paths for cgroup [kubepods burstable podba40e2cfcc5b57908dd25747d02cea61] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/burstable/podba40e2cfcc5b57908dd25747d02cea61]"
	May 12 01:17:01 embed-certs-20220512010611-7184 kubelet[5195]: I0512 01:17:01.212447    5195 pod_container_manager_linux.go:194] "Failed to delete cgroup paths" cgroupName=[kubepods besteffort pod070fb71a-1145-4881-a9cd-076ab7a6d77b] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod070fb71a-1145-4881-a9cd-076ab7a6d77b] : Failed to remove paths: map[rdma:/sys/fs/cgroup/rdma/kubepods/besteffort/pod070fb71a-1145-4881-a9cd-076ab7a6d77b]"
	May 12 01:17:02 embed-certs-20220512010611-7184 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	May 12 01:17:02 embed-certs-20220512010611-7184 kubelet[5195]: I0512 01:17:02.959854    5195 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	May 12 01:17:03 embed-certs-20220512010611-7184 systemd[1]: kubelet.service: Succeeded.
	May 12 01:17:03 embed-certs-20220512010611-7184 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [9dc69e11c6a1] <==
	* 2022/05/12 01:16:37 Starting overwatch
	2022/05/12 01:16:37 Using namespace: kubernetes-dashboard
	2022/05/12 01:16:37 Using in-cluster config to connect to apiserver
	2022/05/12 01:16:37 Using secret token for csrf signing
	2022/05/12 01:16:37 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/05/12 01:16:37 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/05/12 01:16:37 Successful initial request to the apiserver, version: v1.23.5
	2022/05/12 01:16:37 Generating JWE encryption key
	2022/05/12 01:16:37 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/05/12 01:16:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/05/12 01:16:38 Initializing JWE encryption key from synchronized object
	2022/05/12 01:16:38 Creating in-cluster Sidecar client
	2022/05/12 01:16:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/05/12 01:16:38 Serving insecurely on HTTP port: 9090
	
	* 
	* ==> storage-provisioner [518dc8f1de40] <==
	* I0512 01:15:42.268097       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0512 01:15:42.359123       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0512 01:15:42.359210       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0512 01:15:42.457547       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0512 01:15:42.457946       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20220512010611-7184_c3a349cb-879f-44fa-ac06-9be541385714!
	I0512 01:15:42.458764       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"49631300-4fcf-4b37-b7ed-3c03968e9dd4", APIVersion:"v1", ResourceVersion:"523", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20220512010611-7184_c3a349cb-879f-44fa-ac06-9be541385714 became leader
	I0512 01:15:42.658482       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20220512010611-7184_c3a349cb-879f-44fa-ac06-9be541385714!
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0512 01:18:50.038085    3732 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
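The storage-provisioner log further up shows the standard client-go leader-election handshake: acquire the kube-system/k8s.io-minikube-hostpath lock, emit a LeaderElection event on winning, and only then start the provisioner controller. Below is a minimal Go sketch of that pattern, assuming an in-cluster client and the modern Lease-based lock (the provisioner in this log still locks on an Endpoints object); the timings and identity are illustrative, not the provisioner's actual configuration.

	// Sketch only: Lease-based leader election in the shape seen in the
	// storage-provisioner log above. Lock name/namespace come from the log;
	// timings are assumptions.
	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		hostname, _ := os.Hostname()
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Name:      "k8s.io-minikube-hostpath", // lock name from the log
				Namespace: "kube-system",
			},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second, // illustrative timings
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("acquired lease, starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Println("lost lease, shutting down")
				},
			},
		})
	}

	// RunOrDie blocks, renewing the lease until the context is cancelled
	// or the lease is lost.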
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220512010611-7184 -n embed-certs-20220512010611-7184

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-20220512010611-7184 -n embed-certs-20220512010611-7184: exit status 2 (6.9657127s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "embed-certs-20220512010611-7184" apiserver is not running, skipping kubectl commands (state="Paused")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (121.19s)
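The post-mortem above probes a single status field with a Go template (--format={{.APIServer}}) and treats exit status 2 as informational rather than fatal. A minimal sketch of the same kind of check, reusing the binary path and profile name from the log; handling of exit codes other than 2 is an assumption for illustration.

	// Sketch of the harness-style status probe above: exit code 2 means a
	// component is not running, and the field value is still on stdout.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-windows-amd64.exe",
			"status", "--format={{.APIServer}}",
			"-p", "embed-certs-20220512010611-7184")
		out, err := cmd.Output() // stdout is captured even on a non-zero exit

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Printf("apiserver: %s", out)
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 2:
			// Matches helpers_test.go above: "status error: exit status 2 (may be ok)".
			fmt.Printf("apiserver not running, state: %s", out)
		default:
			fmt.Println("status check failed:", err)
		}
	}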

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (68.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-20220512010246-7184 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p old-k8s-version-20220512010246-7184 --alsologtostderr -v=1: exit status 80 (8.1243616s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-20220512010246-7184 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0512 01:20:54.574538     924 out.go:296] Setting OutFile to fd 1912 ...
	I0512 01:20:54.636535     924 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:20:54.636535     924 out.go:309] Setting ErrFile to fd 1752...
	I0512 01:20:54.636535     924 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:20:54.648564     924 out.go:303] Setting JSON to false
	I0512 01:20:54.648564     924 mustload.go:65] Loading cluster: old-k8s-version-20220512010246-7184
	I0512 01:20:54.649536     924 config.go:178] Loaded profile config "old-k8s-version-20220512010246-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0512 01:20:54.663536     924 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220512010246-7184 --format={{.State.Status}}
	I0512 01:20:57.568403     924 cli_runner.go:217] Completed: docker container inspect old-k8s-version-20220512010246-7184 --format={{.State.Status}}: (2.9046258s)
	I0512 01:20:57.568403     924 host.go:66] Checking if "old-k8s-version-20220512010246-7184" exists ...
	I0512 01:20:57.587471     924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220512010246-7184
	I0512 01:20:58.771839     924 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220512010246-7184: (1.1842596s)
	I0512 01:20:58.774091     924 pause.go:58] "namespaces" ="keys" ="(MISSING)"
	I0512 01:20:58.778753     924 out.go:177] * Pausing node old-k8s-version-20220512010246-7184 ... 
	I0512 01:20:58.782835     924 host.go:66] Checking if "old-k8s-version-20220512010246-7184" exists ...
	I0512 01:20:58.795420     924 ssh_runner.go:195] Run: systemctl --version
	I0512 01:20:58.802013     924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220512010246-7184
	I0512 01:21:00.012501     924 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220512010246-7184: (1.2104257s)
	I0512 01:21:00.012501     924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50585 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-20220512010246-7184\id_rsa Username:docker}
	I0512 01:21:00.133038     924 ssh_runner.go:235] Completed: systemctl --version: (1.3375487s)
	I0512 01:21:00.145042     924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 01:21:00.168043     924 pause.go:50] kubelet running: true
	I0512 01:21:00.190064     924 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0512 01:21:00.574973     924 retry.go:31] will retry after 276.165072ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0512 01:21:00.874552     924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 01:21:00.911232     924 pause.go:50] kubelet running: true
	I0512 01:21:00.921235     924 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0512 01:21:01.173600     924 retry.go:31] will retry after 540.190908ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0512 01:21:01.740301     924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 01:21:01.778735     924 pause.go:50] kubelet running: true
	I0512 01:21:01.803446     924 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0512 01:21:02.334904     924 out.go:177] 
	W0512 01:21:02.337906     924 out.go:239] X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	W0512 01:21:02.337906     924 out.go:239] * 
	* 
	W0512 01:21:02.383339     924 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_pause_0a4d03c8adbe4992011689b475409882710ca950_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_pause_0a4d03c8adbe4992011689b475409882710ca950_1.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0512 01:21:02.388335     924 out.go:177] 

                                                
                                                
** /stderr **
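The failure above is mechanical: the kubelet unit still carries a SysV compatibility script, and update-rc.d aborts because the unit's Default-Start header lists no runlevels, so every "sudo systemctl disable --now kubelet" exits 1. The retry.go lines show minikube retrying with a growing delay (276ms, then 540ms) before giving up and mapping the error to GUEST_PAUSE / exit status 80. A minimal sketch of that retry shape follows; the starting interval, doubling factor, and three-attempt cap are assumptions for illustration, not minikube's actual tuning.

	// Sketch of the retry-with-growing-delay behavior visible in the
	// retry.go lines above. The command is the one the pause path runs
	// over SSH; here it is invoked locally purely for illustration.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func disableKubelet() error {
		return exec.Command("sudo", "systemctl", "disable", "--now", "kubelet").Run()
	}

	func main() {
		delay := 250 * time.Millisecond // assumed starting interval
		var err error
		for attempt := 1; attempt <= 3; attempt++ {
			if err = disableKubelet(); err == nil {
				fmt.Println("kubelet disabled")
				return
			}
			fmt.Printf("attempt %d failed (%v), retrying after %v\n", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2 // roughly the doubling seen in the log
		}
		// Once retries are exhausted, minikube surfaces this as
		// GUEST_PAUSE (exit status 80), as in the stderr above.
		fmt.Println("giving up:", err)
	}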
start_stop_delete_test.go:313: out/minikube-windows-amd64.exe pause -p old-k8s-version-20220512010246-7184 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220512010246-7184
helpers_test.go:231: (dbg) Done: docker inspect old-k8s-version-20220512010246-7184: (1.3050037s)
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220512010246-7184:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "09c13f96e40541700b9cc790a0ab055a0fa1b74d8691ec171aaa8db383fbe462",
	        "Created": "2022-05-12T01:09:40.372697Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 224889,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-12T01:12:54.0696936Z",
	            "FinishedAt": "2022-05-12T01:12:34.3022706Z"
	        },
	        "Image": "sha256:8a42e1145657f551cd435eddb43b96ab44d0facbe44106da934225366eeb7757",
	        "ResolvConfPath": "/var/lib/docker/containers/09c13f96e40541700b9cc790a0ab055a0fa1b74d8691ec171aaa8db383fbe462/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/09c13f96e40541700b9cc790a0ab055a0fa1b74d8691ec171aaa8db383fbe462/hostname",
	        "HostsPath": "/var/lib/docker/containers/09c13f96e40541700b9cc790a0ab055a0fa1b74d8691ec171aaa8db383fbe462/hosts",
	        "LogPath": "/var/lib/docker/containers/09c13f96e40541700b9cc790a0ab055a0fa1b74d8691ec171aaa8db383fbe462/09c13f96e40541700b9cc790a0ab055a0fa1b74d8691ec171aaa8db383fbe462-json.log",
	        "Name": "/old-k8s-version-20220512010246-7184",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220512010246-7184:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220512010246-7184",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/efd1441b9492f219138becda346b76206129a4b01aeb33530662ce9014d7857c-init/diff:/var/lib/docker/overlay2/f5e9ce82d74c36e0b6de7ac5d28dc4eacb2abae094b3550db66974795ad74446/diff:/var/lib/docker/overlay2/63fa2edc88f635760dd847938696c8fc50aad8a0c51ab6c6f93b0aa9a6fcefe6/diff:/var/lib/docker/overlay2/5fcaace21e215fd120a775470b4a78ef06d9f466e024f0b1baef234ddf87f04f/diff:/var/lib/docker/overlay2/6c9accb62919ca088333f8b3a193f43886e2c3042c5ec726e1c7fd19ee183024/diff:/var/lib/docker/overlay2/a9a1aea640018bd9e524c36422ea97cc97a391601b718810fed779260e48c87a/diff:/var/lib/docker/overlay2/2476b6e8d656e43d76d4288098236bc765cb37fa9dde696f09bfce485e99229e/diff:/var/lib/docker/overlay2/6cdf08ddc61561e961de67f3e14478397868c53a4d8552909a5b84e35b28db1f/diff:/var/lib/docker/overlay2/6f6f8b6686cc7838a52ce30fdc4e938cde2fb68b161e09a9bc81fa11011619a6/diff:/var/lib/docker/overlay2/9f55b91f762ea60dc3da71cf56e5ca24181e712389d99cb8d972bba474f5d6a4/diff:/var/lib/docker/overlay2/4c269b
df57eae0d21d2034dc086308278f9b00f2301c726b6de876b9ff97298d/diff:/var/lib/docker/overlay2/5fb8ed9b9e765df8150f27b7847ec7333b2c7d978dbc1161da97d630ec7e43e2/diff:/var/lib/docker/overlay2/3a297e9f6ab51d930ef61c49a0fea772cdc2a2704a077db6adb142eb044d9a93/diff:/var/lib/docker/overlay2/2068464f4655627fb513b31660ab34c938e559da10d44fd723ce9e1d744a037d/diff:/var/lib/docker/overlay2/f783596106daebadefbb7774015c369d757665d434c96581b426b2e5f5b453c4/diff:/var/lib/docker/overlay2/ac8927d3cc7829cc82e4c0214dd4adee97aedbe2b7d992cbbc08288443c8e320/diff:/var/lib/docker/overlay2/91eff4612dd56b2c82f4357b879f9cdcbc13704bf8f6dcbfc56febb104774843/diff:/var/lib/docker/overlay2/ee7366acc162efb7b878c4c56df021a8ef756fa595230ffe898cd0dd0355eb44/diff:/var/lib/docker/overlay2/ab5df115d2ec8cd71172942a2e449de32b93a3b6b5d90122c0e734c5e11d6bb2/diff:/var/lib/docker/overlay2/59fcbe9b552129cdc5c96e9e8ff27f4b88a12645aec3cf8f48f28d91521760e0/diff:/var/lib/docker/overlay2/0684681eb5880654b43e8803ef8f17b85e6129c85ff81c13b509563184a77625/diff:/var/lib/d
ocker/overlay2/8bbd24801b480df6ca8545e8f8bb09c17b7598c2868fb94ea5b8775ce2f311b4/diff:/var/lib/docker/overlay2/f28553cc59fccccfdfc5c24b7b8dfe4055c625d0a004731911c34b4ba32a9dfb/diff:/var/lib/docker/overlay2/2e47a8ef6e4481885d71f57a1d9ef99898b741644addd2796de5c2f4c696edb0/diff:/var/lib/docker/overlay2/e1f1eaaa809c974dabd197f590d19be05325f506a53a9a1f8ba29defd7096f60/diff:/var/lib/docker/overlay2/83ce12af60df76f98283ed8f3450cd5727b42d06055b18f04a18068b105ae128/diff:/var/lib/docker/overlay2/5fd34820f54e7f8f0c898c21b5d9d030e5b82c65c901897306c3db475481167f/diff:/var/lib/docker/overlay2/3ecb6f46fa47a8906ff5de1da5a63be9c664ff5bc66faf870126868d36bb77c6/diff:/var/lib/docker/overlay2/ccb92f12dd3e84b11b2c9b1ef6a0581ad5894648432ebe7cb5d16d48c7aacf6e/diff:/var/lib/docker/overlay2/7c6d11dc9abdd4916f3759c8ae4db8c3011cff872f2fd3cc502e7f663e496765/diff:/var/lib/docker/overlay2/b865b0351704115fa113e25f7651d1dc1e2f0348c332552e555e898094f34802/diff:/var/lib/docker/overlay2/bbcf207462c3f88368214d8e4ca222f28a828bd30661741d421665b4d10
80f07/diff:/var/lib/docker/overlay2/b554a32e9a2e4d3773e918754c27a1b32bc7ec5327d3bd1f52d7a146a07fa2c5/diff:/var/lib/docker/overlay2/d0a997bacfa9b1b54f61c62f00ad2797616ea9bb55182aad68ed805f96f5f72b/diff:/var/lib/docker/overlay2/e0c168ecfe6a93618f4f653c1aba422023114f242ab1045591d0c8454573d5c2/diff:/var/lib/docker/overlay2/fb67af38a46ef55935fcfb4f1be5f34b45b3d0e1c571538828117f23eedea417/diff:/var/lib/docker/overlay2/e96ed0776e5f27ef225469ac5f5e8ed2e299c72d5db88782599c0fdd1cec2fe3/diff:/var/lib/docker/overlay2/91b77e60e0a7864ace4f5a4d65f465bd7fe862616a87a74ee9fee21dc5dceb07/diff:/var/lib/docker/overlay2/9829211293f70b356dfa8d07b5dbbc3a6d05415cbd2840fd9dd948b8b315bf18/diff:/var/lib/docker/overlay2/dc35dda36e34a2f4f3a5d958b1a7d4d75db8655c4bc7b4b3d9591f43f9a645fc/diff:/var/lib/docker/overlay2/968c2bb04f641a9c8bd30d38659dc28973b31bfd577bb1aa355ae6c2ab4a0d34/diff:/var/lib/docker/overlay2/37432c6ae0b10a52e95b215fdd2e256362060f32c4a52f0d2021b6e10b3ed77b/diff:/var/lib/docker/overlay2/77687f9734b19f3e8a5bb08b07067e0b572775
20867b7a6ad80b67ffebe332d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/efd1441b9492f219138becda346b76206129a4b01aeb33530662ce9014d7857c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/efd1441b9492f219138becda346b76206129a4b01aeb33530662ce9014d7857c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/efd1441b9492f219138becda346b76206129a4b01aeb33530662ce9014d7857c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220512010246-7184",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220512010246-7184/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220512010246-7184",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220512010246-7184",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220512010246-7184",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "32b0c2ce0f6a9578237c9c6cb025d61417c7468c64453600f63b0a2d42c5033f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50585"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50586"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50587"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50588"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50584"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/32b0c2ce0f6a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220512010246-7184": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "09c13f96e405",
	                        "old-k8s-version-20220512010246-7184"
	                    ],
	                    "NetworkID": "62f4121100c00a6bbb9271af782221f9e410a7052f74222a0961dfec8ebf9fad",
	                    "EndpointID": "fa2e5caac9c73ffde8efa5bb8d61ea966a49ba68415a7ef79aada773644a0215",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
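The Ports map in the inspect output above is where the dynamically assigned host ports live: each container port is published on 127.0.0.1 with HostPort "0" requested and a concrete port resolved at start (e.g. 22/tcp to 50585, the SSH port dialed earlier). The cli_runner lines in the stderr log extract one of these with a Go template passed via -f. A self-contained sketch of how that exact template expression resolves, using a hand-built stand-in for Docker's NetworkSettings rather than the real inspect types:

	// Sketch: the double-index Go template from the log, evaluated against
	// a mock of .NetworkSettings.Ports.
	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		type binding struct{ HostIp, HostPort string }
		settings := map[string]interface{}{
			"NetworkSettings": map[string]interface{}{
				"Ports": map[string][]binding{
					"22/tcp": {{HostIp: "127.0.0.1", HostPort: "50585"}},
				},
			},
		}

		// Same expression as the -f argument above: index the Ports map,
		// index the bindings slice, then read the HostPort field.
		tmpl := template.Must(template.New("port").Parse(
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
		if err := tmpl.Execute(os.Stdout, settings); err != nil {
			panic(err)
		}
	}

Running it prints 50585: the two index calls walk the Ports map and the bindings slice before the final field access.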
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220512010246-7184 -n old-k8s-version-20220512010246-7184

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220512010246-7184 -n old-k8s-version-20220512010246-7184: (8.5534425s)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-20220512010246-7184 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p old-k8s-version-20220512010246-7184 logs -n 25: (9.9135916s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p                                                         | default-k8s-different-port-20220512011148-7184 | minikube4\jenkins | v1.25.2 | 12 May 22 01:11 GMT | 12 May 22 01:14 GMT |
	|         | default-k8s-different-port-20220512011148-7184             |                                                |                   |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |                   |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                |                   |         |                     |                     |
	|         | --kubernetes-version=v1.23.5                               |                                                |                   |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220512011148-7184 | minikube4\jenkins | v1.25.2 | 12 May 22 01:14 GMT | 12 May 22 01:14 GMT |
	|         | default-k8s-different-port-20220512011148-7184             |                                                |                   |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |                   |         |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220512011148-7184 | minikube4\jenkins | v1.25.2 | 12 May 22 01:14 GMT | 12 May 22 01:14 GMT |
	|         | default-k8s-different-port-20220512011148-7184             |                                                |                   |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |                   |         |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220512011148-7184 | minikube4\jenkins | v1.25.2 | 12 May 22 01:14 GMT | 12 May 22 01:14 GMT |
	|         | default-k8s-different-port-20220512011148-7184             |                                                |                   |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |                   |         |                     |                     |
	| ssh     | -p                                                         | no-preload-20220512010315-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:14 GMT | 12 May 22 01:14 GMT |
	|         | no-preload-20220512010315-7184                             |                                                |                   |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |                   |         |                     |                     |
	| pause   | -p                                                         | no-preload-20220512010315-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:14 GMT | 12 May 22 01:14 GMT |
	|         | no-preload-20220512010315-7184                             |                                                |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |                   |         |                     |                     |
	| unpause | -p                                                         | no-preload-20220512010315-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:15 GMT | 12 May 22 01:15 GMT |
	|         | no-preload-20220512010315-7184                             |                                                |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |                   |         |                     |                     |
	| start   | -p                                                         | embed-certs-20220512010611-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:08 GMT | 12 May 22 01:15 GMT |
	|         | embed-certs-20220512010611-7184                            |                                                |                   |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                |                   |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                |                   |         |                     |                     |
	|         | --driver=docker                                            |                                                |                   |         |                     |                     |
	|         | --kubernetes-version=v1.23.5                               |                                                |                   |         |                     |                     |
	| delete  | -p                                                         | no-preload-20220512010315-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:15 GMT | 12 May 22 01:16 GMT |
	|         | no-preload-20220512010315-7184                             |                                                |                   |         |                     |                     |
	| delete  | -p                                                         | no-preload-20220512010315-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:16 GMT | 12 May 22 01:16 GMT |
	|         | no-preload-20220512010315-7184                             |                                                |                   |         |                     |                     |
	| ssh     | -p                                                         | embed-certs-20220512010611-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:16 GMT | 12 May 22 01:16 GMT |
	|         | embed-certs-20220512010611-7184                            |                                                |                   |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |                   |         |                     |                     |
	| logs    | embed-certs-20220512010611-7184                            | embed-certs-20220512010611-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:17 GMT | 12 May 22 01:17 GMT |
	|         | logs -n 25                                                 |                                                |                   |         |                     |                     |
	| start   | -p newest-cni-20220512011616-7184 --memory=2200            | newest-cni-20220512011616-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:16 GMT | 12 May 22 01:18 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |                   |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |                   |         |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.23.6-rc.0          |                                                |                   |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220512011616-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:18 GMT | 12 May 22 01:18 GMT |
	|         | newest-cni-20220512011616-7184                             |                                                |                   |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |                   |         |                     |                     |
	| logs    | embed-certs-20220512010611-7184                            | embed-certs-20220512010611-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:17 GMT | 12 May 22 01:18 GMT |
	|         | logs -n 25                                                 |                                                |                   |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220512011616-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:18 GMT | 12 May 22 01:18 GMT |
	|         | newest-cni-20220512011616-7184                             |                                                |                   |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |                   |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220512011616-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:18 GMT | 12 May 22 01:19 GMT |
	|         | newest-cni-20220512011616-7184                             |                                                |                   |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |                   |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220512010611-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:18 GMT | 12 May 22 01:19 GMT |
	|         | embed-certs-20220512010611-7184                            |                                                |                   |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220512010611-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:19 GMT | 12 May 22 01:19 GMT |
	|         | embed-certs-20220512010611-7184                            |                                                |                   |         |                     |                     |
	| start   | -p newest-cni-20220512011616-7184 --memory=2200            | newest-cni-20220512011616-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:19 GMT | 12 May 22 01:20 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |                   |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |                   |         |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.23.6-rc.0          |                                                |                   |         |                     |                     |
	| start   | -p                                                         | old-k8s-version-20220512010246-7184            | minikube4\jenkins | v1.25.2 | 12 May 22 01:12 GMT | 12 May 22 01:20 GMT |
	|         | old-k8s-version-20220512010246-7184                        |                                                |                   |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                |                   |         |                     |                     |
	|         | --wait=true --kvm-network=default                          |                                                |                   |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                |                   |         |                     |                     |
	|         | --disable-driver-mounts                                    |                                                |                   |         |                     |                     |
	|         | --keep-context=false                                       |                                                |                   |         |                     |                     |
	|         | --driver=docker                                            |                                                |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                                                |                   |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220512011616-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:20 GMT | 12 May 22 01:20 GMT |
	|         | newest-cni-20220512011616-7184                             |                                                |                   |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |                   |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220512011616-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:20 GMT | 12 May 22 01:20 GMT |
	|         | newest-cni-20220512011616-7184                             |                                                |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |                   |         |                     |                     |
	| ssh     | -p                                                         | old-k8s-version-20220512010246-7184            | minikube4\jenkins | v1.25.2 | 12 May 22 01:20 GMT | 12 May 22 01:20 GMT |
	|         | old-k8s-version-20220512010246-7184                        |                                                |                   |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |                   |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220512011616-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:20 GMT | 12 May 22 01:21 GMT |
	|         | newest-cni-20220512011616-7184                             |                                                |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |                   |         |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/12 01:19:49
	Running on machine: minikube4
	Binary: Built with gc go1.18.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0512 01:19:49.571176    4188 out.go:296] Setting OutFile to fd 1860 ...
	I0512 01:19:49.639221    4188 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:19:49.639221    4188 out.go:309] Setting ErrFile to fd 1796...
	I0512 01:19:49.639221    4188 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:19:49.650223    4188 out.go:303] Setting JSON to false
	I0512 01:19:49.653228    4188 start.go:115] hostinfo: {"hostname":"minikube4","uptime":16842,"bootTime":1652301547,"procs":166,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0512 01:19:49.653228    4188 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0512 01:19:49.659225    4188 out.go:177] * [auto-20220512010229-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0512 01:19:49.662226    4188 notify.go:193] Checking for updates...
	I0512 01:19:49.668232    4188 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0512 01:19:49.671297    4188 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0512 01:19:49.676237    4188 out.go:177]   - MINIKUBE_LOCATION=13639
	I0512 01:19:49.678249    4188 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0512 01:19:46.933351    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:19:49.432852    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:19:49.672237    4756 system_pods.go:86] 4 kube-system pods found
	I0512 01:19:49.672237    4756 system_pods.go:89] "coredns-5644d7b6d9-ds6wg" [274c71a2-5a74-40cf-9719-e53e1901acdb] Running
	I0512 01:19:49.672237    4756 system_pods.go:89] "kube-proxy-5dp6x" [29ed9a2f-069f-409e-8a9f-ce8869e1a908] Running
	I0512 01:19:49.672237    4756 system_pods.go:89] "metrics-server-6f89b5864b-xnzbk" [7c6b6847-36d4-4700-b45c-4e00a73b9477] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0512 01:19:49.672237    4756 system_pods.go:89] "storage-provisioner" [aab59255-6979-4cee-bb62-a1d8611e5cf8] Running
	I0512 01:19:49.672237    4756 retry.go:31] will retry after 4.086092664s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
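	
	The retry.go:31 lines above come from a poll-until-ready loop: list the kube-system pods, diff them against the required control-plane components, and sleep before the next attempt. A minimal Go sketch of that pattern, with a hypothetical pollComponents helper and a stubbed check (not minikube's actual retry package; minikube's delays are also jittered, while a fixed interval keeps the sketch short):
	
	package main
	
	import (
		"fmt"
		"time"
	)
	
	// pollComponents repeatedly calls check until no components are missing
	// or the deadline passes, sleeping between attempts; the retry.go:31
	// lines above print exactly this kind of "will retry after ..." message.
	func pollComponents(deadline time.Duration, check func() []string) error {
		start := time.Now()
		wait := 4 * time.Second
		for {
			missing := check()
			if len(missing) == 0 {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("timed out; still missing: %v", missing)
			}
			fmt.Printf("will retry after %v: missing components: %v\n", wait, missing)
			time.Sleep(wait)
		}
	}
	
	func main() {
		attempts := 0
		err := pollComponents(30*time.Second, func() []string {
			attempts++
			if attempts < 3 {
				return []string{"etcd", "kube-scheduler"} // stub: ready on the third poll
			}
			return nil
		})
		fmt.Println("result:", err)
	}
	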
	I0512 01:19:47.965926    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:47.982987    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:48.013100    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:48.171006    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:48.186022    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:48.211529    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:48.374994    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:48.390804    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:48.421480    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:48.563441    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:48.578770    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:48.616536    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:48.769628    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:48.786513    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:48.815443    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:48.973003    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:48.985016    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:49.012504    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:49.176835    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:49.204038    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:49.236355    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:49.365687    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:49.382965    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:49.423915    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:49.567179    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:49.584863    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:49.616760    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:49.769459    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:49.782464    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:49.809387    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:49.974446    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:49.993561    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:50.022494    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:50.163756    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:50.184636    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:50.211748    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:50.211812    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:50.222237    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:50.246716    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:50.246716    6912 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
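	
	The repeated "Checking apiserver status" entries above poll for a running kube-apiserver process; pgrep exits with status 1 when nothing matches, which is why each attempt logs empty stdout/stderr and a non-zero exit. A minimal sketch of that poll in Go (hypothetical, run locally rather than over minikube's ssh_runner):
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	// apiserverPID mirrors the check above: pgrep exits 1 when no process
	// matches, which Go surfaces here as a non-nil error.
	func apiserverPID() (string, error) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		return strings.TrimSpace(string(out)), err
	}
	
	func main() {
		for i := 0; i < 10; i++ {
			if pid, err := apiserverPID(); err == nil {
				fmt.Println("apiserver pid:", pid)
				return
			}
			time.Sleep(200 * time.Millisecond) // the log polls roughly every 150-200ms
		}
		fmt.Println("stopped: unable to get apiserver pid")
	}
	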
	I0512 01:19:50.246716    6912 kubeadm.go:1067] stopping kube-system containers ...
	I0512 01:19:50.255348    6912 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0512 01:19:50.340290    6912 docker.go:442] Stopping containers: [d227455ccdde 2705871ca0f7 49817350aebd 852768ca0726 0cbc97ed8c11 badfe163ceb4 9367d74fd2f8 4d1db2f18b33 4a44055f81f8 fbd2796c00bf 48054c5b8de4 6f1ab527264d 4f0acad8f528 3e9d5d1a9343 6b5810bcd73a 114027ffb054 7e8a5a194b38 d8804284e08f dda6e2dbf316]
	I0512 01:19:50.352131    6912 ssh_runner.go:195] Run: docker stop d227455ccdde 2705871ca0f7 49817350aebd 852768ca0726 0cbc97ed8c11 badfe163ceb4 9367d74fd2f8 4d1db2f18b33 4a44055f81f8 fbd2796c00bf 48054c5b8de4 6f1ab527264d 4f0acad8f528 3e9d5d1a9343 6b5810bcd73a 114027ffb054 7e8a5a194b38 d8804284e08f dda6e2dbf316
	I0512 01:19:50.444468    6912 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0512 01:19:50.490417    6912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0512 01:19:50.512429    6912 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 12 01:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 May 12 01:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 May 12 01:18 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 May 12 01:17 /etc/kubernetes/scheduler.conf
	
	I0512 01:19:50.522401    6912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0512 01:19:50.551414    6912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0512 01:19:50.593427    6912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0512 01:19:50.613499    6912 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:50.622421    6912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0512 01:19:50.649418    6912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0512 01:19:50.670974    6912 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:50.685280    6912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
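	
	The grep/rm pairs above validate each kubeconfig on disk: any file that does not reference the expected control-plane endpoint is deleted so kubeadm can regenerate it in the next phase. A sketch of that check, assuming the endpoint and file list shown in the log (the wrapper itself is hypothetical):
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// grep exits 1 when the pattern is absent, so err != nil means
			// the file does not reference the expected endpoint.
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
				_ = exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}
	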
	I0512 01:19:50.727494    6912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0512 01:19:50.752047    6912 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0512 01:19:50.752047    6912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 01:19:50.884508    6912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 01:19:52.347373    6912 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.4627672s)
	I0512 01:19:52.347447    6912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0512 01:19:52.668499    6912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 01:19:52.897641    6912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
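	
	The Run lines above drive kubeadm phase by phase (certs, kubeconfig, kubelet-start, control-plane, etcd) against a single generated config, rather than a full `kubeadm init`. A hedged Go sketch of the same sequence; the binary path and version are taken from the log, but the wrapper is an assumption:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		binPath := "/var/lib/minikube/binaries/v1.23.6-rc.0" // version from the log
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, p := range phases {
			cmd := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, binPath, p)
			if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
				fmt.Printf("phase %q failed: %v\n%s", p, err, out)
				return
			}
		}
		fmt.Println("all init phases completed")
	}
	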
	I0512 01:19:49.682253    4188 config.go:178] Loaded profile config "default-k8s-different-port-20220512011148-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:19:49.682253    4188 config.go:178] Loaded profile config "newest-cni-20220512011616-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6-rc.0
	I0512 01:19:49.683411    4188 config.go:178] Loaded profile config "old-k8s-version-20220512010246-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0512 01:19:49.683411    4188 driver.go:358] Setting default libvirt URI to qemu:///system
	I0512 01:19:52.538476    4188 docker.go:137] docker version: linux-20.10.14
	I0512 01:19:52.546247    4188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:19:51.988243    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:19:54.426932    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:19:54.890230    4188 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.3438627s)
	I0512 01:19:55.419165    4188 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:86 OomKillDisable:true NGoroutines:63 SystemTime:2022-05-12 01:19:53.7251318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:19:55.739799    4188 out.go:177] * Using the docker driver based on user configuration
	I0512 01:19:53.773240    4756 system_pods.go:86] 5 kube-system pods found
	I0512 01:19:53.773240    4756 system_pods.go:89] "coredns-5644d7b6d9-ds6wg" [274c71a2-5a74-40cf-9719-e53e1901acdb] Running
	I0512 01:19:53.773240    4756 system_pods.go:89] "kube-apiserver-old-k8s-version-20220512010246-7184" [5e4b74f1-7f9f-4b1e-bfbb-762b651204a1] Pending
	I0512 01:19:53.773240    4756 system_pods.go:89] "kube-proxy-5dp6x" [29ed9a2f-069f-409e-8a9f-ce8869e1a908] Running
	I0512 01:19:53.773240    4756 system_pods.go:89] "metrics-server-6f89b5864b-xnzbk" [7c6b6847-36d4-4700-b45c-4e00a73b9477] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0512 01:19:53.773240    4756 system_pods.go:89] "storage-provisioner" [aab59255-6979-4cee-bb62-a1d8611e5cf8] Running
	I0512 01:19:53.773240    4756 retry.go:31] will retry after 6.402197611s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0512 01:19:53.169962    6912 api_server.go:51] waiting for apiserver process to appear ...
	I0512 01:19:53.187515    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:19:53.730427    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:19:54.229642    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:19:54.732637    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:19:55.239773    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:19:55.743657    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:19:56.227267    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:19:56.733277    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:19:57.239840    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:19:57.726233    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:19:55.744334    4188 start.go:284] selected driver: docker
	I0512 01:19:55.744334    4188 start.go:801] validating driver "docker" against <nil>
	I0512 01:19:55.744334    4188 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0512 01:19:55.828778    4188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:19:58.492957    4188 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.6640423s)
	I0512 01:19:58.492957    4188 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:86 OomKillDisable:true NGoroutines:63 SystemTime:2022-05-12 01:19:57.0171379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:19:58.492957    4188 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0512 01:19:58.493961    4188 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0512 01:19:58.496990    4188 out.go:177] * Using Docker Desktop driver with the root privilege
	I0512 01:19:58.498953    4188 cni.go:95] Creating CNI manager for ""
	I0512 01:19:58.498953    4188 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 01:19:58.498953    4188 start_flags.go:306] config:
	{Name:auto-20220512010229-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:auto-20220512010229-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 01:19:58.502956    4188 out.go:177] * Starting control plane node auto-20220512010229-7184 in cluster auto-20220512010229-7184
	I0512 01:19:58.504961    4188 cache.go:120] Beginning downloading kic base image for docker with docker
	I0512 01:19:58.507965    4188 out.go:177] * Pulling base image ...
	I0512 01:19:58.509945    4188 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:19:58.509945    4188 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0512 01:19:58.509945    4188 preload.go:148] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0512 01:19:58.509945    4188 cache.go:57] Caching tarball of preloaded images
	I0512 01:19:58.509945    4188 preload.go:174] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0512 01:19:58.510965    4188 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0512 01:19:58.510965    4188 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\config.json ...
	I0512 01:19:58.510965    4188 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\config.json: {Name:mkd138d070c3656e8dfc555bf2a37060768135d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:19:59.973645    4188 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
	I0512 01:19:59.973645    4188 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in daemon, skipping load
	I0512 01:19:59.973645    4188 cache.go:206] Successfully downloaded all kic artifacts
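	
	The image check above skips the pull because the kicbase image is already present in the local daemon. One cheap way to reproduce that check (a sketch; minikube's image.go may probe the daemon differently):
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// imageInDaemon checks the local daemon: `docker image inspect`
	// exits non-zero when the reference is absent.
	func imageInDaemon(ref string) bool {
		return exec.Command("docker", "image", "inspect", ref).Run() == nil
	}
	
	func main() {
		ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138"
		if imageInDaemon(ref) {
			fmt.Println("found in local docker daemon, skipping pull")
		} else {
			fmt.Println("not found locally, a pull would be needed")
		}
	}
	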
	I0512 01:19:59.973645    4188 start.go:352] acquiring machines lock for auto-20220512010229-7184: {Name:mkce085adb4528067fc9b8e27ba1f8fcfad3c3c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 01:19:59.973645    4188 start.go:356] acquired machines lock for "auto-20220512010229-7184" in 0s
	I0512 01:19:59.974439    4188 start.go:91] Provisioning new machine with config: &{Name:auto-20220512010229-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:auto-20220512010229-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 01:19:59.974439    4188 start.go:131] createHost starting for "" (driver="docker")
	I0512 01:19:56.448640    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:19:58.907967    4792 pod_ready.go:81] duration metric: took 4m0.0148566s waiting for pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace to be "Ready" ...
	E0512 01:19:58.907967    4792 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0512 01:19:58.907967    4792 pod_ready.go:38] duration metric: took 4m5.7987586s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0512 01:19:58.907967    4792 kubeadm.go:605] restartCluster took 4m38.5249642s
	W0512 01:19:58.907967    4792 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0512 01:19:58.908975    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0512 01:20:00.291379    4756 system_pods.go:86] 5 kube-system pods found
	I0512 01:20:00.291379    4756 system_pods.go:89] "coredns-5644d7b6d9-ds6wg" [274c71a2-5a74-40cf-9719-e53e1901acdb] Running
	I0512 01:20:00.291379    4756 system_pods.go:89] "kube-apiserver-old-k8s-version-20220512010246-7184" [5e4b74f1-7f9f-4b1e-bfbb-762b651204a1] Running
	I0512 01:20:00.291530    4756 system_pods.go:89] "kube-proxy-5dp6x" [29ed9a2f-069f-409e-8a9f-ce8869e1a908] Running
	I0512 01:20:00.291567    4756 system_pods.go:89] "metrics-server-6f89b5864b-xnzbk" [7c6b6847-36d4-4700-b45c-4e00a73b9477] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0512 01:20:00.291567    4756 system_pods.go:89] "storage-provisioner" [aab59255-6979-4cee-bb62-a1d8611e5cf8] Running
	I0512 01:20:00.291639    4756 retry.go:31] will retry after 6.062999549s: missing components: etcd, kube-controller-manager, kube-scheduler
	I0512 01:19:58.232279    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:19:58.731964    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:19:58.890978    6912 api_server.go:71] duration metric: took 5.7207215s to wait for apiserver process to appear ...
	I0512 01:19:58.890978    6912 api_server.go:87] waiting for apiserver healthz status ...
	I0512 01:19:58.890978    6912 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50853/healthz ...
	I0512 01:19:58.896961    6912 api_server.go:256] stopped: https://127.0.0.1:50853/healthz: Get "https://127.0.0.1:50853/healthz": EOF
	I0512 01:19:59.401007    6912 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50853/healthz ...
	I0512 01:19:59.978436    4188 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0512 01:19:59.978436    4188 start.go:165] libmachine.API.Create for "auto-20220512010229-7184" (driver="docker")
	I0512 01:19:59.978436    4188 client.go:168] LocalClient.Create starting
	I0512 01:19:59.979496    4188 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0512 01:19:59.979496    4188 main.go:134] libmachine: Decoding PEM data...
	I0512 01:19:59.979496    4188 main.go:134] libmachine: Parsing certificate...
	I0512 01:19:59.979496    4188 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0512 01:19:59.980461    4188 main.go:134] libmachine: Decoding PEM data...
	I0512 01:19:59.980461    4188 main.go:134] libmachine: Parsing certificate...
	I0512 01:19:59.994433    4188 cli_runner.go:164] Run: docker network inspect auto-20220512010229-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0512 01:20:01.234030    4188 cli_runner.go:211] docker network inspect auto-20220512010229-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 01:20:01.234030    4188 cli_runner.go:217] Completed: docker network inspect auto-20220512010229-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2395329s)
	I0512 01:20:01.241764    4188 network_create.go:272] running [docker network inspect auto-20220512010229-7184] to gather additional debugging logs...
	I0512 01:20:01.241798    4188 cli_runner.go:164] Run: docker network inspect auto-20220512010229-7184
	W0512 01:20:02.452635    4188 cli_runner.go:211] docker network inspect auto-20220512010229-7184 returned with exit code 1
	I0512 01:20:02.452635    4188 cli_runner.go:217] Completed: docker network inspect auto-20220512010229-7184: (1.2106913s)
	I0512 01:20:02.452635    4188 network_create.go:275] error running [docker network inspect auto-20220512010229-7184]: docker network inspect auto-20220512010229-7184: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220512010229-7184
	I0512 01:20:02.452635    4188 network_create.go:277] output of [docker network inspect auto-20220512010229-7184]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220512010229-7184
	
	** /stderr **
	I0512 01:20:02.460063    4188 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 01:20:03.649817    4188 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1896924s)
	I0512 01:20:03.673052    4188 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000794630] misses:0}
	I0512 01:20:03.673052    4188 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:20:03.673052    4188 network_create.go:115] attempt to create docker network auto-20220512010229-7184 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0512 01:20:03.687033    4188 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220512010229-7184
	I0512 01:20:06.376301    4756 system_pods.go:86] 7 kube-system pods found
	I0512 01:20:06.376301    4756 system_pods.go:89] "coredns-5644d7b6d9-ds6wg" [274c71a2-5a74-40cf-9719-e53e1901acdb] Running
	I0512 01:20:06.376301    4756 system_pods.go:89] "etcd-old-k8s-version-20220512010246-7184" [8197f31d-c95a-42f1-9974-091d1c27c60b] Pending
	I0512 01:20:06.376301    4756 system_pods.go:89] "kube-apiserver-old-k8s-version-20220512010246-7184" [5e4b74f1-7f9f-4b1e-bfbb-762b651204a1] Running
	I0512 01:20:06.376301    4756 system_pods.go:89] "kube-controller-manager-old-k8s-version-20220512010246-7184" [464fedb8-445d-4d2b-98af-2fea913fa291] Pending
	I0512 01:20:06.376301    4756 system_pods.go:89] "kube-proxy-5dp6x" [29ed9a2f-069f-409e-8a9f-ce8869e1a908] Running
	I0512 01:20:06.376301    4756 system_pods.go:89] "metrics-server-6f89b5864b-xnzbk" [7c6b6847-36d4-4700-b45c-4e00a73b9477] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0512 01:20:06.376301    4756 system_pods.go:89] "storage-provisioner" [aab59255-6979-4cee-bb62-a1d8611e5cf8] Running
	I0512 01:20:06.376301    4756 retry.go:31] will retry after 10.504197539s: missing components: etcd, kube-controller-manager, kube-scheduler
	I0512 01:20:04.406423    6912 api_server.go:256] stopped: https://127.0.0.1:50853/healthz: Get "https://127.0.0.1:50853/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0512 01:20:04.910700    6912 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50853/healthz ...
	I0512 01:20:05.276798    6912 api_server.go:266] https://127.0.0.1:50853/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0512 01:20:05.276798    6912 api_server.go:102] status: https://127.0.0.1:50853/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0512 01:20:05.400835    6912 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50853/healthz ...
	I0512 01:20:05.577325    6912 api_server.go:266] https://127.0.0.1:50853/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0512 01:20:05.577325    6912 api_server.go:102] status: https://127.0.0.1:50853/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
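	
	The healthz probes above hit https://127.0.0.1:50853/healthz directly: EOF means the apiserver is not accepting connections yet, 403 means it is up but anonymous access to /healthz is still forbidden while RBAC bootstraps, and 500 lists the poststarthooks that have not finished. A minimal poller along those lines (hypothetical, not minikube's api_server.go):
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		// Local apiserver endpoint from the log; TLS verification is skipped
		// because the probe targets 127.0.0.1 with the cluster's self-signed cert.
		url := "https://127.0.0.1:50853/healthz"
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for i := 0; i < 20; i++ {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("stopped:", err) // e.g. EOF while the apiserver is still starting
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz ok")
					return
				}
				// 403: up, but anonymous /healthz is forbidden until RBAC bootstraps.
				// 500: up, but some poststarthooks are still failing.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for healthz")
	}
	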
	I0512 01:20:05.902960    6912 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50853/healthz ...
	I0512 01:20:05.983572    6912 api_server.go:266] https://127.0.0.1:50853/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0512 01:20:05.983572    6912 api_server.go:102] status: https://127.0.0.1:50853/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0512 01:20:06.405287    6912 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50853/healthz ...
	I0512 01:20:06.483009    6912 api_server.go:266] https://127.0.0.1:50853/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0512 01:20:06.483127    6912 api_server.go:102] status: https://127.0.0.1:50853/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0512 01:20:06.909964    6912 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50853/healthz ...
	I0512 01:20:07.552892    6912 api_server.go:266] https://127.0.0.1:50853/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0512 01:20:07.553019    6912 api_server.go:102] status: https://127.0.0.1:50853/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0512 01:20:07.904511    6912 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50853/healthz ...
	W0512 01:20:04.799654    4188 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220512010229-7184 returned with exit code 1
	I0512 01:20:04.799654    4188 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220512010229-7184: (1.1124913s)
	W0512 01:20:04.799654    4188 network_create.go:107] failed to create docker network auto-20220512010229-7184 192.168.49.0/24, will retry: subnet is taken
	I0512 01:20:04.822835    4188 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000794630] amended:false}} dirty:map[] misses:0}
	I0512 01:20:04.822875    4188 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:20:04.843798    4188 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000794630] amended:true}} dirty:map[192.168.49.0:0xc000794630 192.168.58.0:0xc000006a08] misses:0}
	I0512 01:20:04.843798    4188 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:20:04.843798    4188 network_create.go:115] attempt to create docker network auto-20220512010229-7184 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0512 01:20:04.850795    4188 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220512010229-7184
	I0512 01:20:06.166981    4188 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220512010229-7184: (1.3151242s)
	I0512 01:20:06.166981    4188 network_create.go:99] docker network auto-20220512010229-7184 192.168.58.0/24 created
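Note: the lines above show minikube's subnet-reservation retry: 192.168.49.0/24 still has an unexpired reservation, so network.go skips it, reserves the next candidate 192.168.58.0/24 for 1m0s, and re-runs docker network create. A minimal sketch of that probe-and-step loop, assuming a fixed step of 9 between candidate /24s (49 -> 58, as in the log) and a hypothetical isReserved check standing in for minikube's reservation map:

    package main

    import (
    	"fmt"
    	"net"
    )

    // isReserved is a hypothetical stand-in for minikube's reservation map,
    // which tracks subnets handed out within the last minute.
    func isReserved(subnet string) bool { return subnet == "192.168.49.0" }

    // firstFreeSubnet walks candidate /24s starting at 192.168.49.0,
    // stepping the third octet by 9 (49 -> 58 -> 67 ...), as the log suggests.
    func firstFreeSubnet() (*net.IPNet, error) {
    	for octet := 49; octet <= 247; octet += 9 {
    		candidate := fmt.Sprintf("192.168.%d.0", octet)
    		if isReserved(candidate) {
    			continue // unexpired reservation: skip, like network.go:279
    		}
    		_, ipnet, err := net.ParseCIDR(candidate + "/24")
    		if err != nil {
    			return nil, err
    		}
    		return ipnet, nil
    	}
    	return nil, fmt.Errorf("no free /24 found")
    }

    func main() {
    	ipnet, err := firstFreeSubnet()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("using free private subnet", ipnet) // 192.168.58.0/24
    }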
	I0512 01:20:06.166981    4188 kic.go:106] calculated static IP "192.168.58.2" for the "auto-20220512010229-7184" container
	I0512 01:20:06.193972    4188 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0512 01:20:07.431171    4188 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.2370298s)
	I0512 01:20:07.439300    4188 cli_runner.go:164] Run: docker volume create auto-20220512010229-7184 --label name.minikube.sigs.k8s.io=auto-20220512010229-7184 --label created_by.minikube.sigs.k8s.io=true
	I0512 01:20:08.711061    4188 cli_runner.go:217] Completed: docker volume create auto-20220512010229-7184 --label name.minikube.sigs.k8s.io=auto-20220512010229-7184 --label created_by.minikube.sigs.k8s.io=true: (1.2716197s)
	I0512 01:20:08.711061    4188 oci.go:103] Successfully created a docker volume auto-20220512010229-7184
	I0512 01:20:08.718042    4188 cli_runner.go:164] Run: docker run --rm --name auto-20220512010229-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220512010229-7184 --entrypoint /usr/bin/test -v auto-20220512010229-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib
	I0512 01:20:07.981541    6912 api_server.go:266] https://127.0.0.1:50853/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0512 01:20:07.981541    6912 api_server.go:102] status: https://127.0.0.1:50853/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0512 01:20:08.412223    6912 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50853/healthz ...
	I0512 01:20:08.484191    6912 api_server.go:266] https://127.0.0.1:50853/healthz returned 200:
	ok
	I0512 01:20:08.588044    6912 api_server.go:140] control plane version: v1.23.6-rc.0
	I0512 01:20:08.588044    6912 api_server.go:130] duration metric: took 9.6965749s to wait for apiserver health ...
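Note: the health wait above polls https://127.0.0.1:&lt;port&gt;/healthz and logs each 500 body until the rbac/bootstrap-roles poststarthook settles and the endpoint answers 200 "ok". A minimal sketch of such a poll, assuming the apiserver cert is self-signed for 127.0.0.1 (hence InsecureSkipVerify) and treating port 50853 as an example value from this run:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // HTTP 200, mirroring the Checking/returned loop in api_server.go.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		// Assumption: skip TLS verification, since the apiserver serving
    		// cert for 127.0.0.1 is not signed by a system root.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // "returned 200: ok"
    			}
    			fmt.Printf("status: %s returned error %d:\n%s\n", url, resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond) // back off between probes
    	}
    	return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
    	if err := waitForHealthz("https://127.0.0.1:50853/healthz", time.Minute); err != nil {
    		panic(err)
    	}
    }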
	I0512 01:20:08.588189    6912 cni.go:95] Creating CNI manager for ""
	I0512 01:20:08.588189    6912 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 01:20:08.588189    6912 system_pods.go:43] waiting for kube-system pods to appear ...
	I0512 01:20:08.693623    6912 system_pods.go:59] 8 kube-system pods found
	I0512 01:20:08.693623    6912 system_pods.go:61] "coredns-64897985d-5ws8d" [3ab4607e-b641-4ec4-95b8-e748182293c3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0512 01:20:08.693623    6912 system_pods.go:61] "etcd-newest-cni-20220512011616-7184" [bd9ea317-d13b-4bd1-816d-a9cebacb0f9d] Running
	I0512 01:20:08.693623    6912 system_pods.go:61] "kube-apiserver-newest-cni-20220512011616-7184" [fea8351d-15a8-453d-a564-46ca2334caf1] Running
	I0512 01:20:08.693623    6912 system_pods.go:61] "kube-controller-manager-newest-cni-20220512011616-7184" [18ff8810-e805-4c54-bca7-51c98357c897] Running
	I0512 01:20:08.693623    6912 system_pods.go:61] "kube-proxy-4rh4b" [b0893ff4-bc22-47ac-8feb-c4f6dd7d3fb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0512 01:20:08.693623    6912 system_pods.go:61] "kube-scheduler-newest-cni-20220512011616-7184" [658fbc66-063f-4b58-b41f-e054ec6b9ec4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0512 01:20:08.693623    6912 system_pods.go:61] "metrics-server-b955d9d8-nkjgl" [6ec6d39c-3946-4260-a4ae-3b080a511a18] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0512 01:20:08.693623    6912 system_pods.go:61] "storage-provisioner" [e46bc56d-a455-44fd-a6ca-36a598ad3fdd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0512 01:20:08.693623    6912 system_pods.go:74] duration metric: took 105.4288ms to wait for pod list to return data ...
	I0512 01:20:08.693623    6912 node_conditions.go:102] verifying NodePressure condition ...
	I0512 01:20:08.775510    6912 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0512 01:20:08.775510    6912 node_conditions.go:123] node cpu capacity is 16
	I0512 01:20:08.775698    6912 node_conditions.go:105] duration metric: took 82.0704ms to run NodePressure ...
	I0512 01:20:08.775698    6912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 01:20:11.396272    6912 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.6204433s)
	I0512 01:20:11.396272    6912 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0512 01:20:11.485562    6912 ops.go:34] apiserver oom_adj: -16
	I0512 01:20:11.485562    6912 kubeadm.go:605] restartCluster took 25.5629595s
	I0512 01:20:11.485562    6912 kubeadm.go:393] StartCluster complete in 25.6695855s
	I0512 01:20:11.485562    6912 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:20:11.485562    6912 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0512 01:20:11.493548    6912 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:20:11.672134    6912 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220512011616-7184" rescaled to 1
	I0512 01:20:11.672386    6912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0512 01:20:11.672500    6912 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0512 01:20:11.672386    6912 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 01:20:11.676966    6912 out.go:177] * Verifying Kubernetes components...
	I0512 01:20:11.672594    6912 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220512011616-7184"
	I0512 01:20:11.672594    6912 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220512011616-7184"
	I0512 01:20:11.672594    6912 addons.go:65] Setting dashboard=true in profile "newest-cni-20220512011616-7184"
	I0512 01:20:11.672594    6912 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220512011616-7184"
	I0512 01:20:11.672932    6912 config.go:178] Loaded profile config "newest-cni-20220512011616-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6-rc.0
	I0512 01:20:11.677159    6912 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220512011616-7184"
	I0512 01:20:11.677159    6912 addons.go:153] Setting addon dashboard=true in "newest-cni-20220512011616-7184"
	W0512 01:20:11.677307    6912 addons.go:165] addon dashboard should already be in state true
	I0512 01:20:11.677344    6912 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220512011616-7184"
	W0512 01:20:11.677344    6912 addons.go:165] addon metrics-server should already be in state true
	I0512 01:20:11.677600    6912 host.go:66] Checking if "newest-cni-20220512011616-7184" exists ...
	I0512 01:20:11.677637    6912 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220512011616-7184"
	W0512 01:20:11.679387    6912 addons.go:165] addon storage-provisioner should already be in state true
	I0512 01:20:11.680588    6912 host.go:66] Checking if "newest-cni-20220512011616-7184" exists ...
	I0512 01:20:11.677637    6912 host.go:66] Checking if "newest-cni-20220512011616-7184" exists ...
	I0512 01:20:11.701049    6912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 01:20:11.708053    6912 cli_runner.go:164] Run: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}
	I0512 01:20:11.713433    6912 cli_runner.go:164] Run: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}
	I0512 01:20:11.716939    6912 cli_runner.go:164] Run: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}
	I0512 01:20:11.719578    6912 cli_runner.go:164] Run: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}
	I0512 01:20:12.588862    6912 start.go:795] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0512 01:20:12.599856    6912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184
	I0512 01:20:13.246533    6912 cli_runner.go:217] Completed: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}: (1.5263316s)
	I0512 01:20:13.249817    6912 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0512 01:20:13.252968    6912 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 01:20:13.252968    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0512 01:20:13.262555    6912 cli_runner.go:217] Completed: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}: (1.5451467s)
	I0512 01:20:13.265858    6912 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0512 01:20:13.267570    6912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184
	I0512 01:20:13.272555    6912 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0512 01:20:13.276529    6912 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0512 01:20:13.276529    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0512 01:20:13.295546    6912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184
	I0512 01:20:13.300559    6912 cli_runner.go:217] Completed: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}: (1.5870464s)
	I0512 01:20:13.305548    6912 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0512 01:20:12.064431    4188 cli_runner.go:217] Completed: docker run --rm --name auto-20220512010229-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220512010229-7184 --entrypoint /usr/bin/test -v auto-20220512010229-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib: (3.3462212s)
	I0512 01:20:12.064431    4188 oci.go:107] Successfully prepared a docker volume auto-20220512010229-7184
	I0512 01:20:12.064431    4188 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:20:12.064431    4188 kic.go:179] Starting extracting preloaded images to volume ...
	I0512 01:20:12.084632    4188 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20220512010229-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir
	I0512 01:20:16.900552    4756 system_pods.go:86] 8 kube-system pods found
	I0512 01:20:16.900552    4756 system_pods.go:89] "coredns-5644d7b6d9-ds6wg" [274c71a2-5a74-40cf-9719-e53e1901acdb] Running
	I0512 01:20:16.900552    4756 system_pods.go:89] "etcd-old-k8s-version-20220512010246-7184" [8197f31d-c95a-42f1-9974-091d1c27c60b] Running
	I0512 01:20:16.900552    4756 system_pods.go:89] "kube-apiserver-old-k8s-version-20220512010246-7184" [5e4b74f1-7f9f-4b1e-bfbb-762b651204a1] Running
	I0512 01:20:16.900552    4756 system_pods.go:89] "kube-controller-manager-old-k8s-version-20220512010246-7184" [464fedb8-445d-4d2b-98af-2fea913fa291] Running
	I0512 01:20:16.900552    4756 system_pods.go:89] "kube-proxy-5dp6x" [29ed9a2f-069f-409e-8a9f-ce8869e1a908] Running
	I0512 01:20:16.900552    4756 system_pods.go:89] "kube-scheduler-old-k8s-version-20220512010246-7184" [ee09078d-37ef-42bd-bdc4-c6d4d41df903] Pending
	I0512 01:20:16.900552    4756 system_pods.go:89] "metrics-server-6f89b5864b-xnzbk" [7c6b6847-36d4-4700-b45c-4e00a73b9477] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0512 01:20:16.900552    4756 system_pods.go:89] "storage-provisioner" [aab59255-6979-4cee-bb62-a1d8611e5cf8] Running
	I0512 01:20:16.900552    4756 retry.go:31] will retry after 12.194240946s: missing components: kube-scheduler
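Note: system_pods above finds kube-scheduler still Pending, so retry.go schedules another pass (~12s later in this run) and only returns once every required component reports Running. A minimal sketch of that retry-until-running wait, with a hypothetical podPhase lookup standing in for the real client-go pod listing:

    package main

    import (
    	"fmt"
    	"time"
    )

    // podPhase is a hypothetical stand-in for listing kube-system pods;
    // here it simulates kube-scheduler staying Pending for two rounds.
    var attempts int

    func podPhase(name string) string {
    	attempts++
    	if name == "kube-scheduler" && attempts < 3 {
    		return "Pending"
    	}
    	return "Running"
    }

    // waitForComponents retries until every required component is Running,
    // mirroring the "will retry after ...: missing components" loop above.
    func waitForComponents(required []string, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		var missing []string
    		for _, name := range required {
    			if podPhase(name) != "Running" {
    				missing = append(missing, name)
    			}
    		}
    		if len(missing) == 0 {
    			return nil
    		}
    		fmt.Printf("will retry after %s: missing components: %v\n", interval, missing)
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("timed out waiting for components")
    }

    func main() {
    	err := waitForComponents([]string{"kube-apiserver", "kube-scheduler"}, time.Second, time.Minute)
    	fmt.Println("done:", err)
    }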
	I0512 01:20:13.308529    6912 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0512 01:20:13.308529    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0512 01:20:13.309551    6912 cli_runner.go:217] Completed: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}: (1.6014178s)
	I0512 01:20:13.322538    6912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184
	I0512 01:20:13.395534    6912 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220512011616-7184"
	W0512 01:20:13.395534    6912 addons.go:165] addon default-storageclass should already be in state true
	I0512 01:20:13.395534    6912 host.go:66] Checking if "newest-cni-20220512011616-7184" exists ...
	I0512 01:20:13.418576    6912 cli_runner.go:164] Run: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}
	I0512 01:20:14.242039    6912 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184: (1.6421002s)
	I0512 01:20:14.242039    6912 api_server.go:51] waiting for apiserver process to appear ...
	I0512 01:20:14.255895    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:20:14.370900    6912 api_server.go:71] duration metric: took 2.6981703s to wait for apiserver process to appear ...
	I0512 01:20:14.370900    6912 api_server.go:87] waiting for apiserver healthz status ...
	I0512 01:20:14.370900    6912 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50853/healthz ...
	I0512 01:20:14.398889    6912 api_server.go:266] https://127.0.0.1:50853/healthz returned 200:
	ok
	I0512 01:20:14.403882    6912 api_server.go:140] control plane version: v1.23.6-rc.0
	I0512 01:20:14.403882    6912 api_server.go:130] duration metric: took 32.9806ms to wait for apiserver health ...
	I0512 01:20:14.403882    6912 system_pods.go:43] waiting for kube-system pods to appear ...
	I0512 01:20:14.420910    6912 system_pods.go:59] 8 kube-system pods found
	I0512 01:20:14.420910    6912 system_pods.go:61] "coredns-64897985d-5ws8d" [3ab4607e-b641-4ec4-95b8-e748182293c3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0512 01:20:14.420910    6912 system_pods.go:61] "etcd-newest-cni-20220512011616-7184" [bd9ea317-d13b-4bd1-816d-a9cebacb0f9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0512 01:20:14.420910    6912 system_pods.go:61] "kube-apiserver-newest-cni-20220512011616-7184" [fea8351d-15a8-453d-a564-46ca2334caf1] Running
	I0512 01:20:14.420910    6912 system_pods.go:61] "kube-controller-manager-newest-cni-20220512011616-7184" [18ff8810-e805-4c54-bca7-51c98357c897] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0512 01:20:14.420910    6912 system_pods.go:61] "kube-proxy-4rh4b" [b0893ff4-bc22-47ac-8feb-c4f6dd7d3fb0] Running
	I0512 01:20:14.425116    6912 system_pods.go:61] "kube-scheduler-newest-cni-20220512011616-7184" [658fbc66-063f-4b58-b41f-e054ec6b9ec4] Running
	I0512 01:20:14.425158    6912 system_pods.go:61] "metrics-server-b955d9d8-nkjgl" [6ec6d39c-3946-4260-a4ae-3b080a511a18] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0512 01:20:14.425158    6912 system_pods.go:61] "storage-provisioner" [e46bc56d-a455-44fd-a6ca-36a598ad3fdd] Running
	I0512 01:20:14.425208    6912 system_pods.go:74] duration metric: took 21.3249ms to wait for pod list to return data ...
	I0512 01:20:14.425208    6912 default_sa.go:34] waiting for default service account to be created ...
	I0512 01:20:14.478487    6912 default_sa.go:45] found service account: "default"
	I0512 01:20:14.478487    6912 default_sa.go:55] duration metric: took 53.2761ms for default service account to be created ...
	I0512 01:20:14.478487    6912 kubeadm.go:548] duration metric: took 2.8057519s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0512 01:20:14.478487    6912 node_conditions.go:102] verifying NodePressure condition ...
	I0512 01:20:14.502796    6912 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0512 01:20:14.502796    6912 node_conditions.go:123] node cpu capacity is 16
	I0512 01:20:14.502796    6912 node_conditions.go:105] duration metric: took 24.3079ms to run NodePressure ...
	I0512 01:20:14.502796    6912 start.go:213] waiting for startup goroutines ...
	I0512 01:20:14.892055    6912 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184: (1.596429s)
	I0512 01:20:14.893039    6912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50854 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-20220512011616-7184\id_rsa Username:docker}
	I0512 01:20:14.907528    6912 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184: (1.5849104s)
	I0512 01:20:14.908022    6912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50854 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-20220512011616-7184\id_rsa Username:docker}
	I0512 01:20:14.923028    6912 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184: (1.653413s)
	I0512 01:20:14.923028    6912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50854 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-20220512011616-7184\id_rsa Username:docker}
	I0512 01:20:15.065660    6912 cli_runner.go:217] Completed: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}: (1.6470015s)
	I0512 01:20:15.065660    6912 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0512 01:20:15.065660    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0512 01:20:15.087656    6912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184
	I0512 01:20:15.403525    6912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 01:20:15.468338    6912 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0512 01:20:15.468445    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0512 01:20:15.473573    6912 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0512 01:20:15.473573    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0512 01:20:15.681380    6912 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0512 01:20:15.681380    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0512 01:20:15.772044    6912 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0512 01:20:15.772044    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0512 01:20:15.974184    6912 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0512 01:20:15.974184    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0512 01:20:16.070371    6912 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0512 01:20:16.070371    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0512 01:20:16.325294    6912 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184: (1.2374346s)
	I0512 01:20:16.325740    6912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50854 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-20220512011616-7184\id_rsa Username:docker}
	I0512 01:20:16.382374    6912 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0512 01:20:16.382374    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0512 01:20:16.399346    6912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0512 01:20:16.601484    6912 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0512 01:20:16.602267    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0512 01:20:16.879990    6912 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0512 01:20:16.879990    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0512 01:20:16.891990    6912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0512 01:20:17.187007    6912 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0512 01:20:17.187007    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0512 01:20:17.300673    6912 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0512 01:20:17.300673    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0512 01:20:17.571563    6912 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0512 01:20:17.571563    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0512 01:20:17.799999    6912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0512 01:20:19.474901    6912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.0711719s)
	I0512 01:20:20.060411    6912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.660882s)
	I0512 01:20:20.060411    6912 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220512011616-7184"
	I0512 01:20:20.060411    6912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.1682621s)
	I0512 01:20:21.385198    6912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.5850169s)
	I0512 01:20:21.388281    6912 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0512 01:20:21.390184    6912 addons.go:417] enableAddons completed in 9.7173095s
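Note: enableAddons above stages each manifest over SSH ("scp memory --> /etc/kubernetes/addons/...") and then applies all files for an addon in a single kubectl invocation, which is why the dashboard's ten YAMLs appear as one long apply command. A minimal sketch of that batch apply, assuming a local kubectl on PATH rather than the in-VM /var/lib/minikube/binaries path:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // applyAddon runs one `kubectl apply` over every manifest for an addon,
    // the way metrics-server's four files are applied together above.
    func applyAddon(kubeconfig string, manifests []string) error {
    	args := []string{"--kubeconfig", kubeconfig, "apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	out, err := exec.Command("kubectl", args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("kubectl apply: %v\n%s", err, out)
    	}
    	return nil
    }

    func main() {
    	// Paths are illustrative; in the log they live under /etc/kubernetes/addons.
    	manifests := []string{
    		"/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"/etc/kubernetes/addons/metrics-server-service.yaml",
    	}
    	if err := applyAddon("/var/lib/minikube/kubeconfig", manifests); err != nil {
    		fmt.Println(err)
    	}
    }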
	I0512 01:20:21.640158    6912 start.go:499] kubectl: 1.18.2, cluster: 1.23.6-rc.0 (minor skew: 5)
	I0512 01:20:21.643797    6912 out.go:177] 
	W0512 01:20:21.646687    6912 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.23.6-rc.0.
	I0512 01:20:21.651852    6912 out.go:177]   - Want kubectl v1.23.6-rc.0? Try 'minikube kubectl -- get pods -A'
	I0512 01:20:21.660836    6912 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220512011616-7184" cluster and "default" namespace by default
	I0512 01:20:29.127663    4756 system_pods.go:86] 8 kube-system pods found
	I0512 01:20:29.127843    4756 system_pods.go:89] "coredns-5644d7b6d9-ds6wg" [274c71a2-5a74-40cf-9719-e53e1901acdb] Running
	I0512 01:20:29.127956    4756 system_pods.go:89] "etcd-old-k8s-version-20220512010246-7184" [8197f31d-c95a-42f1-9974-091d1c27c60b] Running
	I0512 01:20:29.127994    4756 system_pods.go:89] "kube-apiserver-old-k8s-version-20220512010246-7184" [5e4b74f1-7f9f-4b1e-bfbb-762b651204a1] Running
	I0512 01:20:29.127994    4756 system_pods.go:89] "kube-controller-manager-old-k8s-version-20220512010246-7184" [464fedb8-445d-4d2b-98af-2fea913fa291] Running
	I0512 01:20:29.127994    4756 system_pods.go:89] "kube-proxy-5dp6x" [29ed9a2f-069f-409e-8a9f-ce8869e1a908] Running
	I0512 01:20:29.127994    4756 system_pods.go:89] "kube-scheduler-old-k8s-version-20220512010246-7184" [ee09078d-37ef-42bd-bdc4-c6d4d41df903] Running
	I0512 01:20:29.128099    4756 system_pods.go:89] "metrics-server-6f89b5864b-xnzbk" [7c6b6847-36d4-4700-b45c-4e00a73b9477] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0512 01:20:29.128099    4756 system_pods.go:89] "storage-provisioner" [aab59255-6979-4cee-bb62-a1d8611e5cf8] Running
	I0512 01:20:29.128171    4756 system_pods.go:126] duration metric: took 56.7371491s to wait for k8s-apps to be running ...
	I0512 01:20:29.128226    4756 system_svc.go:44] waiting for kubelet service to be running ....
	I0512 01:20:29.142548    4756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 01:20:29.165073    4756 system_svc.go:56] duration metric: took 36.845ms WaitForService to wait for kubelet.
	I0512 01:20:29.165613    4756 kubeadm.go:548] duration metric: took 1m9.344661s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0512 01:20:29.165613    4756 node_conditions.go:102] verifying NodePressure condition ...
	I0512 01:20:29.178919    4756 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0512 01:20:29.178919    4756 node_conditions.go:123] node cpu capacity is 16
	I0512 01:20:29.178919    4756 node_conditions.go:105] duration metric: took 13.3051ms to run NodePressure ...
	I0512 01:20:29.178919    4756 start.go:213] waiting for startup goroutines ...
	I0512 01:20:29.451947    4756 start.go:499] kubectl: 1.18.2, cluster: 1.16.0 (minor skew: 2)
	I0512 01:20:29.635603    4756 out.go:177] 
	W0512 01:20:29.769121    4756 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.16.0.
	I0512 01:20:29.772435    4756 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0512 01:20:29.780826    4756 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-20220512010246-7184" cluster and "default" namespace by default
	I0512 01:20:35.734579    4188 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20220512010229-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir: (23.6487399s)
	I0512 01:20:35.734579    4188 kic.go:188] duration metric: took 23.668940 seconds to extract preloaded images to volume
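Note: the extraction above works by bind-mounting the lz4 preload archive and the target named volume into a throwaway kicbase container whose entrypoint is tar. A minimal sketch of shelling out to that docker run, with placeholder tarball path and an elided image digest:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	const (
    		tarball = `C:\path\to\preloaded-images.tar.lz4` // placeholder for the cached preload
    		volume  = "auto-20220512010229-7184"
    		image   = "gcr.io/k8s-minikube/kicbase-builds:v0.0.30" // digest elided
    	)
    	start := time.Now()
    	// tar runs inside the container; -I lz4 decompresses, -C targets the volume mount.
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		panic(fmt.Sprintf("%v\n%s", err, out))
    	}
    	fmt.Printf("took %s to extract preloaded images to volume\n", time.Since(start))
    }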
	I0512 01:20:35.740587    4188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:20:38.037595    4188 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2968901s)
	I0512 01:20:38.037595    4188 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:86 OomKillDisable:true NGoroutines:63 SystemTime:2022-05-12 01:20:36.9027889 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:20:38.051149    4188 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0512 01:20:40.242818    4188 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.191556s)
	I0512 01:20:40.255389    4188 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220512010229-7184 --name auto-20220512010229-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220512010229-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220512010229-7184 --network auto-20220512010229-7184 --ip 192.168.58.2 --volume auto-20220512010229-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
	I0512 01:20:42.478695    4188 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220512010229-7184 --name auto-20220512010229-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220512010229-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220512010229-7184 --network auto-20220512010229-7184 --ip 192.168.58.2 --volume auto-20220512010229-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: (2.2231917s)
	I0512 01:20:42.488699    4188 cli_runner.go:164] Run: docker container inspect auto-20220512010229-7184 --format={{.State.Running}}
	I0512 01:20:43.648407    4188 cli_runner.go:217] Completed: docker container inspect auto-20220512010229-7184 --format={{.State.Running}}: (1.1586552s)
	I0512 01:20:43.656247    4188 cli_runner.go:164] Run: docker container inspect auto-20220512010229-7184 --format={{.State.Status}}
	I0512 01:20:44.760296    4188 cli_runner.go:217] Completed: docker container inspect auto-20220512010229-7184 --format={{.State.Status}}: (1.1039925s)
	I0512 01:20:44.767306    4188 cli_runner.go:164] Run: docker exec auto-20220512010229-7184 stat /var/lib/dpkg/alternatives/iptables
	I0512 01:20:46.054013    4188 cli_runner.go:217] Completed: docker exec auto-20220512010229-7184 stat /var/lib/dpkg/alternatives/iptables: (1.2866406s)
	I0512 01:20:46.054013    4188 oci.go:247] the created container "auto-20220512010229-7184" has a running status.
	I0512 01:20:46.054013    4188 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\auto-20220512010229-7184\id_rsa...
	I0512 01:20:46.471711    4188 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\auto-20220512010229-7184\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0512 01:20:47.767250    4188 cli_runner.go:164] Run: docker container inspect auto-20220512010229-7184 --format={{.State.Status}}
	I0512 01:20:48.908876    4188 cli_runner.go:217] Completed: docker container inspect auto-20220512010229-7184 --format={{.State.Status}}: (1.1414628s)
	I0512 01:20:48.926841    4188 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0512 01:20:48.926841    4188 kic_runner.go:114] Args: [docker exec --privileged auto-20220512010229-7184 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0512 01:20:49.968722    4792 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (51.0545819s)
	I0512 01:20:49.982715    4792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 01:20:50.021715    4792 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0512 01:20:50.047725    4792 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0512 01:20:50.059709    4792 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0512 01:20:50.085720    4792 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0512 01:20:50.085720    4792 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0512 01:20:50.250555    4188 kic_runner.go:123] Done: [docker exec --privileged auto-20220512010229-7184 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.3235276s)
	I0512 01:20:50.255049    4188 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\auto-20220512010229-7184\id_rsa...
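Note: kic.go generates a per-machine RSA key, copies the public half into /home/docker/.ssh/authorized_keys inside the container, and then chowns it to the docker user (the kic_runner exec above). A minimal sketch of the keypair-and-authorized_keys step, using golang.org/x/crypto/ssh to marshal the public key (an assumption for the sketch; minikube's actual helper lives elsewhere):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"encoding/pem"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Generate the machine key, as kic.go does for each node.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	// Private key -> id_rsa (PEM); 0600 so only the current user can read it,
    	// matching the permission check logged above.
    	privPEM := pem.EncodeToMemory(&pem.Block{
    		Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key),
    	})
    	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
    		panic(err)
    	}
    	// Public key in authorized_keys format: the payload copied into the container.
    	pub, err := ssh.NewPublicKey(&key.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
    		panic(err)
    	}
    }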
	I0512 01:20:50.850170    4188 cli_runner.go:164] Run: docker container inspect auto-20220512010229-7184 --format={{.State.Status}}
	I0512 01:20:51.926226    4188 cli_runner.go:217] Completed: docker container inspect auto-20220512010229-7184 --format={{.State.Status}}: (1.0758274s)
	I0512 01:20:51.926302    4188 machine.go:88] provisioning docker machine ...
	I0512 01:20:51.926390    4188 ubuntu.go:169] provisioning hostname "auto-20220512010229-7184"
	I0512 01:20:51.936531    4188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184
	I0512 01:20:53.032292    4188 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184: (1.0955797s)
	I0512 01:20:53.036616    4188 main.go:134] libmachine: Using SSH client type: native
	I0512 01:20:53.037612    4188 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50942 <nil> <nil>}
	I0512 01:20:53.037612    4188 main.go:134] libmachine: About to run SSH command:
	sudo hostname auto-20220512010229-7184 && echo "auto-20220512010229-7184" | sudo tee /etc/hostname
	I0512 01:20:53.236394    4188 main.go:134] libmachine: SSH cmd err, output: <nil>: auto-20220512010229-7184
	
	I0512 01:20:53.245809    4188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184
	I0512 01:20:54.422089    4188 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184: (1.1762193s)
	I0512 01:20:54.425089    4188 main.go:134] libmachine: Using SSH client type: native
	I0512 01:20:54.426091    4188 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50942 <nil> <nil>}
	I0512 01:20:54.426091    4188 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-20220512010229-7184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20220512010229-7184/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-20220512010229-7184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0512 01:20:54.622533    4188 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0512 01:20:54.622533    4188 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0512 01:20:54.622533    4188 ubuntu.go:177] setting up certificates
	I0512 01:20:54.622533    4188 provision.go:83] configureAuth start
	I0512 01:20:54.634553    4188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220512010229-7184
	I0512 01:20:55.837746    4188 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220512010229-7184: (1.2030963s)
	I0512 01:20:55.837866    4188 provision.go:138] copyHostCerts
	I0512 01:20:55.837866    4188 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0512 01:20:55.837866    4188 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0512 01:20:55.838813    4188 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0512 01:20:55.840112    4188 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0512 01:20:55.840184    4188 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0512 01:20:55.840549    4188 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0512 01:20:55.841894    4188 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0512 01:20:55.841981    4188 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0512 01:20:55.842440    4188 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0512 01:20:55.843369    4188 provision.go:112] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.auto-20220512010229-7184 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20220512010229-7184]
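Note: provision.go:112 above mints a server certificate with SANs covering the node IP, loopback, and the machine name, signed by the minikube CA. A minimal self-contained sketch (self-signed here for brevity, whereas the real code signs with ca.pem/ca-key.pem):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.auto-20220512010229-7184"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs matching the log: node IP, loopback, hostname aliases.
    		IPAddresses: []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
    		DNSNames:    []string{"localhost", "minikube", "auto-20220512010229-7184"},
    	}
    	// Self-signed for the sketch; minikube signs with its CA key instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }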
	I0512 01:20:56.306928    4188 provision.go:172] copyRemoteCerts
	I0512 01:20:56.319974    4188 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0512 01:20:56.329935    4188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184
	I0512 01:20:57.552955    4188 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184: (1.2229562s)
	I0512 01:20:57.552955    4188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50942 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\auto-20220512010229-7184\id_rsa Username:docker}
	I0512 01:20:57.711987    4188 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.3919414s)
	I0512 01:20:57.713721    4188 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0512 01:20:57.772018    4188 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I0512 01:20:57.821436    4188 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0512 01:20:57.873681    4188 provision.go:86] duration metric: configureAuth took 3.2509807s
	I0512 01:20:57.873681    4188 ubuntu.go:193] setting minikube options for container-runtime
	I0512 01:20:57.874705    4188 config.go:178] Loaded profile config "auto-20220512010229-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:20:57.890672    4188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184
	I0512 01:20:59.036444    4188 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184: (1.1457132s)
	I0512 01:20:59.040446    4188 main.go:134] libmachine: Using SSH client type: native
	I0512 01:20:59.040446    4188 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50942 <nil> <nil>}
	I0512 01:20:59.040446    4188 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0512 01:20:59.262939    4188 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0512 01:20:59.262939    4188 ubuntu.go:71] root file system type: overlay
	I0512 01:20:59.262939    4188 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0512 01:20:59.280110    4188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184
	I0512 01:21:00.500281    4188 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184: (1.2199894s)
	I0512 01:21:00.506156    4188 main.go:134] libmachine: Using SSH client type: native
	I0512 01:21:00.507194    4188 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50942 <nil> <nil>}
	I0512 01:21:00.507194    4188 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0512 01:21:00.731136    4188 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
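	
	The comments in the unit above describe systemd's standard override mechanism: an empty ExecStart= first clears any inherited start command, and the next ExecStart= supplies the replacement. A minimal sketch of the same pattern, assuming a drop-in at a hypothetical override path and trimmed-down dockerd flags (not the exact provisioning output above):
	
	# Hypothetical sketch: override dockerd's command via a systemd drop-in.
	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
	[Service]
	# Clear the inherited ExecStart=, then set the new command.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker
	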
	I0512 01:21:00.744785    4188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184
	I0512 01:21:01.916813    4188 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184: (1.1719673s)
	I0512 01:21:01.923820    4188 main.go:134] libmachine: Using SSH client type: native
	I0512 01:21:01.924819    4188 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50942 <nil> <nil>}
	I0512 01:21:01.924819    4188 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0512 01:21:03.446750    4188 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-05 13:17:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-12 01:21:00.714844000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
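	
	The one-liner above is an idempotent update: diff -u exits non-zero only when docker.service.new differs from the installed unit, so the move, daemon-reload, enable, and restart run only on change (which is why the diff output here is followed by the SysV synchronization messages). Restated as a readable sketch with the same paths:
	
	NEW=/lib/systemd/system/docker.service.new
	CUR=/lib/systemd/system/docker.service
	if ! sudo diff -u "$CUR" "$NEW"; then
	    # Files differ: install the new unit and restart docker.
	    sudo mv "$NEW" "$CUR"
	    sudo systemctl -f daemon-reload
	    sudo systemctl -f enable docker
	    sudo systemctl -f restart docker
	fi
	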
	I0512 01:21:03.447107    4188 machine.go:91] provisioned docker machine in 11.5201641s
	I0512 01:21:03.447107    4188 client.go:171] LocalClient.Create took 1m3.465425s
	I0512 01:21:03.447206    4188 start.go:173] duration metric: libmachine.API.Create for "auto-20220512010229-7184" took 1m3.465478s
	I0512 01:21:03.447258    4188 start.go:306] post-start starting for "auto-20220512010229-7184" (driver="docker")
	I0512 01:21:03.447296    4188 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0512 01:21:03.460656    4188 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0512 01:21:03.467833    4188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184
	I0512 01:21:04.732787    4188 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184: (1.2648893s)
	I0512 01:21:04.732787    4188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50942 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\auto-20220512010229-7184\id_rsa Username:docker}
	I0512 01:21:04.867883    4188 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.407155s)
	I0512 01:21:04.890873    4188 ssh_runner.go:195] Run: cat /etc/os-release
	I0512 01:21:04.901879    4188 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0512 01:21:04.901879    4188 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0512 01:21:04.901879    4188 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0512 01:21:04.901879    4188 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0512 01:21:04.901879    4188 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0512 01:21:04.902978    4188 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0512 01:21:04.903870    4188 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem -> 71842.pem in /etc/ssl/certs
	I0512 01:21:04.914881    4188 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0512 01:21:04.941868    4188 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /etc/ssl/certs/71842.pem (1708 bytes)
	I0512 01:21:04.997105    4188 start.go:309] post-start completed in 1.5497672s
	I0512 01:21:05.009155    4188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220512010229-7184
	I0512 01:21:06.301449    4188 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220512010229-7184: (1.2922275s)
	I0512 01:21:06.301449    4188 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\config.json ...
	I0512 01:21:06.323447    4188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 01:21:06.333451    4188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184
	I0512 01:21:07.645074    4188 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184: (1.3115548s)
	I0512 01:21:07.645074    4188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50942 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\auto-20220512010229-7184\id_rsa Username:docker}
	I0512 01:21:07.789349    4188 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.4658263s)
	I0512 01:21:07.810348    4188 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0512 01:21:07.825336    4188 start.go:134] duration metric: createHost completed in 1m7.8474256s
	I0512 01:21:07.825336    4188 start.go:81] releasing machines lock for "auto-20220512010229-7184", held for 1m7.84822s
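	
	The two df invocations above read the usage percentage and remaining capacity of /var; awk 'NR==2{print $N}' selects a field from the data row beneath df's header. A self-contained sketch of the same check (variable names are illustrative):
	
	used_pct=$(df -h /var | awk 'NR==2{print $5}')    # e.g. "17%"
	free_gb=$(df -BG /var | awk 'NR==2{print $4}')    # e.g. "215G"
	echo "/var: ${used_pct} used, ${free_gb} free"
	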
	I0512 01:21:07.834344    4188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220512010229-7184
	I0512 01:21:09.366448    4188 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220512010229-7184: (1.5320259s)
	I0512 01:21:09.368461    4188 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0512 01:21:09.393448    4188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184
	I0512 01:21:09.397453    4188 ssh_runner.go:195] Run: systemctl --version
	I0512 01:21:09.413472    4188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184
	I0512 01:21:11.127873    4188 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184: (1.7343356s)
	I0512 01:21:11.127873    4188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50942 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\auto-20220512010229-7184\id_rsa Username:docker}
	I0512 01:21:11.142895    4188 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184: (1.7293339s)
	I0512 01:21:11.143898    4188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50942 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\auto-20220512010229-7184\id_rsa Username:docker}
	I0512 01:21:11.293901    4188 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.9253405s)
	I0512 01:21:11.294905    4188 ssh_runner.go:235] Completed: systemctl --version: (1.8973548s)
	I0512 01:21:11.307892    4188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0512 01:21:11.423767    4188 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 01:21:11.450794    4188 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0512 01:21:11.459790    4188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0512 01:21:11.491780    4188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0512 01:21:11.535770    4188 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0512 01:21:11.733908    4188 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0512 01:21:11.927206    4188 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 01:21:12.009214    4188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0512 01:21:12.209796    4188 ssh_runner.go:195] Run: sudo systemctl start docker
	I0512 01:21:12.248824    4188 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 01:21:12.355817    4188 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
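	
	Taken together, the systemctl calls above are the usual sequence for bringing Docker up cleanly under systemd: unmask the unit in case it was masked, enable socket activation, reload unit definitions, start the daemon, then confirm the server answers. As one hedged sketch:
	
	sudo systemctl unmask docker.service
	sudo systemctl enable docker.socket
	sudo systemctl daemon-reload
	sudo systemctl start docker
	docker version --format '{{.Server.Version}}'   # e.g. 20.10.15
	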
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-05-12 01:12:54 UTC, end at Thu 2022-05-12 01:21:19 UTC. --
	May 12 01:18:31 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:18:31.068446500Z" level=info msg="ignoring event" container=edf5c72d76107d77303bdc63b75a5c143a51412db62fb5b748843a3bfd7f3a26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:18:31 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:18:31.466462200Z" level=info msg="ignoring event" container=ffdd1b8390a14b7365c1a8e0af08730f82f3632ae534d1ad9166e60b83eda758 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:18:31 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:18:31.823609400Z" level=info msg="ignoring event" container=c80f0ee9dc76332929a10503bed3f2791f71a1f438565632dcc36bde693bfa97 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:18:32 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:18:32.161850600Z" level=info msg="ignoring event" container=7fedc89a2e63ab82a1eb4c73a2100c2b608bc90ac15eff0a34155ec4d5b9287a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:18:32 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:18:32.586513900Z" level=info msg="ignoring event" container=26850d91e05e50e404cfbae0eb9a3758099cd1a8ad614d8e6c7b3f9e1d0d9b18 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:19:23 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:19:23.288891700Z" level=error msg="stream copy error: reading from a closed fifo"
	May 12 01:19:23 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:19:23.289200100Z" level=error msg="stream copy error: reading from a closed fifo"
	May 12 01:19:24 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:19:24.267261600Z" level=error msg="e7cb0d7181edd1d86d79ad9b4191be26320d98e31cbf341325033d69e3fc3cb3 cleanup: failed to delete container from containerd: no such container"
	May 12 01:19:24 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:19:24.267674600Z" level=error msg="Handler for POST /containers/e7cb0d7181edd1d86d79ad9b4191be26320d98e31cbf341325033d69e3fc3cb3/start returned error: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: writing syncT \"procResume\": write init-p: broken pipe: unknown"
	May 12 01:19:31 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:19:31.161874800Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:19:31 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:19:31.162183300Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:19:31 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:19:31.171747000Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:19:32 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:19:32.720085200Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	May 12 01:19:56 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:19:56.391525800Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	May 12 01:19:56 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:19:56.591358600Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	May 12 01:20:14 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:20:14.202798600Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:20:14 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:20:14.205167900Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:20:14 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:20:14.232090300Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:20:14 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:20:14.712562600Z" level=info msg="ignoring event" container=47b400ef79f5d137f34caa11383a0e9ad1c28f2ae99e7685c5bb6e7bd9513f91 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:20:15 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:20:15.789896000Z" level=info msg="ignoring event" container=ab00bd231ec66c29a530b2ea2b905bcc464fa8d5d6ed515a1825aa8501bc2d08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:20:32 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:20:32.639646500Z" level=info msg="ignoring event" container=0754e7bc90666385decb9bee83def6c80f7df268f31433e5f339e9e447963dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:20:41 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:20:41.799381300Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:20:41 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:20:41.799567000Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:20:41 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:20:41.812380400Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:21:02 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:21:02.500445200Z" level=info msg="ignoring event" container=73624efb9aa4e1285e0ad418a15d51078278772c4d29458bccf6eb606275ff27 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	73624efb9aa4e       a90209bb39e3d                                                                                    18 seconds ago       Exited              dashboard-metrics-scraper   3                   30a183e4533ff
	57dce2ef6b231       kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2   About a minute ago   Running             kubernetes-dashboard        0                   2059a775d6bc9
	4a09b1dec8680       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   082d65acc8fdc
	65058f3069c2f       bf261d1579144                                                                                    About a minute ago   Running             coredns                     0                   5dcf8588c18e0
	e2f6eb90e5344       c21b0c7400f98                                                                                    About a minute ago   Running             kube-proxy                  0                   b83104470f2c5
	9f2268fa3de9e       b2756210eeabf                                                                                    2 minutes ago        Running             etcd                        0                   0f8a4822f8593
	1e0cf8fdf46a2       06a629a7e51cd                                                                                    2 minutes ago        Running             kube-controller-manager     0                   19ad00af00379
	289649ce5a72e       b305571ca60a5                                                                                    2 minutes ago        Running             kube-apiserver              0                   090830a0e30d8
	eb642e98a5e5e       301ddc62b80b1                                                                                    2 minutes ago        Running             kube-scheduler              0                   fa0ec8426828a
	
	* 
	* ==> coredns [65058f3069c2] <==
	* .:53
	2022-05-12T01:19:25.474Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2022-05-12T01:19:25.475Z [INFO] CoreDNS-1.6.2
	2022-05-12T01:19:25.475Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2022-05-12T01:19:54.427Z [INFO] plugin/reload: Running configuration MD5 = 034a4984a79adc08e57427d1bc08b68f
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20220512010246-7184
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20220512010246-7184
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50a7977b568d2ad3e04003527a57f4502d6177a0
	                    minikube.k8s.io/name=old-k8s-version-20220512010246-7184
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_12T01_18_55_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 May 2022 01:18:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 May 2022 01:20:43 +0000   Thu, 12 May 2022 01:18:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 May 2022 01:20:43 +0000   Thu, 12 May 2022 01:18:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 May 2022 01:20:43 +0000   Thu, 12 May 2022 01:18:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 May 2022 01:20:43 +0000   Thu, 12 May 2022 01:18:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-20220512010246-7184
	Capacity:
	 cpu:                16
	 ephemeral-storage:  263174212Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             52638988Ki
	 pods:               110
	Allocatable:
	 cpu:                16
	 ephemeral-storage:  263174212Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             52638988Ki
	 pods:               110
	System Info:
	 Machine ID:                 8556a0a9a0e64ba4b825f672d2dce0b9
	 System UUID:                8556a0a9a0e64ba4b825f672d2dce0b9
	 Boot ID:                    10186544-b659-4889-afdb-c2512535b797
	 Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	 OS Image:                   Ubuntu 20.04.4 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://20.10.15
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-ds6wg                                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m11s
	  kube-system                etcd-old-k8s-version-20220512010246-7184                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                kube-apiserver-old-k8s-version-20220512010246-7184             250m (1%)     0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                kube-controller-manager-old-k8s-version-20220512010246-7184    200m (1%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                kube-proxy-5dp6x                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                kube-scheduler-old-k8s-version-20220512010246-7184             100m (0%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                metrics-server-6f89b5864b-xnzbk                                100m (0%)     0 (0%)      200Mi (0%)       0 (0%)         114s
	  kube-system                storage-provisioner                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kubernetes-dashboard       dashboard-metrics-scraper-6b84985989-bn4zg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard       kubernetes-dashboard-6fb5469cf5-mrs7d                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%)   0 (0%)
	  memory             270Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From                                             Message
	  ----    ------                   ----                   ----                                             -------
	  Normal  NodeHasSufficientMemory  2m42s (x8 over 2m43s)  kubelet, old-k8s-version-20220512010246-7184     Node old-k8s-version-20220512010246-7184 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m42s (x8 over 2m43s)  kubelet, old-k8s-version-20220512010246-7184     Node old-k8s-version-20220512010246-7184 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m42s (x7 over 2m43s)  kubelet, old-k8s-version-20220512010246-7184     Node old-k8s-version-20220512010246-7184 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s                   kube-proxy, old-k8s-version-20220512010246-7184  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [May12 00:52] WSL2: Performing memory compaction.
	[May12 00:54] WSL2: Performing memory compaction.
	[May12 00:56] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.010948] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.036593] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000001] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000001] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[May12 00:57] WSL2: Performing memory compaction.
	[May12 00:58] WSL2: Performing memory compaction.
	[May12 01:00] WSL2: Performing memory compaction.
	[May12 01:01] WSL2: Performing memory compaction.
	[May12 01:02] WSL2: Performing memory compaction.
	[May12 01:03] WSL2: Performing memory compaction.
	[May12 01:05] WSL2: Performing memory compaction.
	[May12 01:06] WSL2: Performing memory compaction.
	[May12 01:07] WSL2: Performing memory compaction.
	[May12 01:08] WSL2: Performing memory compaction.
	[May12 01:09] WSL2: Performing memory compaction.
	[May12 01:12] WSL2: Performing memory compaction.
	[May12 01:14] WSL2: Performing memory compaction.
	[May12 01:16] WSL2: Performing memory compaction.
	[May12 01:19] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [9f2268fa3de9] <==
	* 2022-05-12 01:19:27.879660 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:0 size:5" took too long (113.8091ms) to execute
	2022-05-12 01:19:27.879875 W | etcdserver: read-only range request "key:\"/registry/rolebindings/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:0 size:5" took too long (110.6783ms) to execute
	2022-05-12 01:19:27.880081 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:0 size:5" took too long (109.3269ms) to execute
	2022-05-12 01:19:28.072374 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/metrics-server\" " with result "range_response_count:1 size:2901" took too long (189.9728ms) to execute
	2022-05-12 01:19:28.072562 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:0 size:5" took too long (104.6452ms) to execute
	2022-05-12 01:19:28.072605 W | etcdserver: read-only range request "key:\"/registry/namespaces/kubernetes-dashboard\" " with result "range_response_count:1 size:547" took too long (107.2846ms) to execute
	2022-05-12 01:19:28.072692 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:0 size:5" took too long (104.7872ms) to execute
	2022-05-12 01:19:28.072747 W | etcdserver: read-only range request "key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6b84985989.16ee36cd0a9a7fb4\" " with result "range_response_count:1 size:695" took too long (106.3427ms) to execute
	2022-05-12 01:19:28.270087 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:0 size:5" took too long (103.6388ms) to execute
	2022-05-12 01:19:28.277750 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:0 size:5" took too long (111.265ms) to execute
	2022-05-12 01:19:28.277912 W | etcdserver: read-only range request "key:\"/registry/events/kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5.16ee36cd10c048c4\" " with result "range_response_count:1 size:675" took too long (101.9887ms) to execute
	2022-05-12 01:19:28.868581 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/storage-provisioner\" " with result "range_response_count:1 size:2007" took too long (103.8104ms) to execute
	2022-05-12 01:19:29.182531 W | etcdserver: read-only range request "key:\"/registry/namespaces/kubernetes-dashboard\" " with result "range_response_count:1 size:547" took too long (119.1325ms) to execute
	2022-05-12 01:19:29.963778 W | etcdserver: request "header:<ID:15638328274604654967 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-bn4zg\" mod_revision:472 > success:<request_put:<key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-bn4zg\" value_size:1612 >> failure:<request_range:<key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-bn4zg\" > >>" with result "size:16" took too long (101.175ms) to execute
	2022-05-12 01:19:51.964672 W | etcdserver: read-only range request "key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" " with result "range_response_count:0 size:5" took too long (195.7886ms) to execute
	2022-05-12 01:19:51.965053 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:178" took too long (473.6882ms) to execute
	2022-05-12 01:19:55.761407 W | etcdserver: read-only range request "key:\"/registry/limitranges\" range_end:\"/registry/limitranget\" count_only:true " with result "range_response_count:0 size:5" took too long (304.7121ms) to execute
	2022-05-12 01:20:00.282286 W | etcdserver: read-only range request "key:\"/registry/resourcequotas\" range_end:\"/registry/resourcequotat\" count_only:true " with result "range_response_count:0 size:5" took too long (116.9054ms) to execute
	2022-05-12 01:20:07.571835 W | etcdserver: read-only range request "key:\"/registry/configmaps\" range_end:\"/registry/configmapt\" count_only:true " with result "range_response_count:0 size:7" took too long (197.2191ms) to execute
	2022-05-12 01:20:18.575884 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (429.1823ms) to execute
	2022-05-12 01:20:29.778209 W | etcdserver: read-only range request "key:\"/registry/horizontalpodautoscalers\" range_end:\"/registry/horizontalpodautoscalert\" count_only:true " with result "range_response_count:0 size:5" took too long (100.2334ms) to execute
	2022-05-12 01:20:29.778668 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitiont\" count_only:true " with result "range_response_count:0 size:5" took too long (452.3901ms) to execute
	2022-05-12 01:20:29.779006 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (392.5599ms) to execute
	2022-05-12 01:20:31.908269 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (104.2955ms) to execute
	2022-05-12 01:20:50.371028 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings\" range_end:\"/registry/clusterrolebindingt\" count_only:true " with result "range_response_count:0 size:7" took too long (238.0992ms) to execute
	
	* 
	* ==> kernel <==
	*  01:21:21 up  2:29,  0 users,  load average: 8.81, 7.10, 5.33
	Linux old-k8s-version-20220512010246-7184 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [289649ce5a72] <==
	* Trace[1489116783]: [2.3843582s] [2.3843582s] END
	I0512 01:19:19.811247       1 trace.go:116] Trace[2076838749]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2022-05-12 01:19:18.250866 +0000 UTC m=+36.366462601) (total time: 1.5591978s):
	Trace[2076838749]: [1.55899s] [1.5583578s] Transaction committed
	I0512 01:19:19.811325       1 trace.go:116] Trace[807904916]: "Get" url:/api/v1/namespaces/default (started: 2022-05-12 01:19:13.9798591 +0000 UTC m=+32.095455301) (total time: 5.8305022s):
	Trace[807904916]: [5.8304425s] [5.8302591s] About to write a response
	I0512 01:19:19.811338       1 trace.go:116] Trace[505957615]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/old-k8s-version-20220512010246-7184 (started: 2022-05-12 01:19:18.2503042 +0000 UTC m=+36.365900301) (total time: 1.5600757s):
	Trace[505957615]: [1.5600757s] [1.5596064s] END
	I0512 01:19:19.811354       1 trace.go:116] Trace[1750567319]: "List" url:/apis/batch/v1/jobs (started: 2022-05-12 01:19:17.4256292 +0000 UTC m=+35.541225201) (total time: 2.3847699s):
	Trace[1750567319]: [2.3846382s] [2.3844563s] Listing from storage done
	I0512 01:19:19.811838       1 trace.go:116] Trace[2071442353]: "Get" url:/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale (started: 2022-05-12 01:19:11.9070316 +0000 UTC m=+30.022627601) (total time: 7.9038439s):
	Trace[2071442353]: [7.9037594s] [7.9037137s] About to write a response
	I0512 01:19:19.859412       1 trace.go:116] Trace[1214309108]: "Get" url:/apis/apps/v1/namespaces/kube-system/replicasets/coredns-5644d7b6d9 (started: 2022-05-12 01:19:18.6798763 +0000 UTC m=+36.795472201) (total time: 1.1785558s):
	Trace[1214309108]: [1.1784293s] [1.1783951s] About to write a response
	I0512 01:19:19.859705       1 trace.go:116] Trace[463022542]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-proxy-5dp6x (started: 2022-05-12 01:19:18.7276601 +0000 UTC m=+36.843256101) (total time: 1.1310774s):
	Trace[463022542]: [1.1309109s] [1.1308608s] About to write a response
	I0512 01:19:30.568768       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0512 01:19:30.568962       1 handler_proxy.go:99] no RequestInfo found in the context
	E0512 01:19:30.569196       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0512 01:19:30.569227       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0512 01:20:30.574249       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0512 01:20:30.574492       1 handler_proxy.go:99] no RequestInfo found in the context
	E0512 01:20:30.574669       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0512 01:20:30.574739       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [1e0cf8fdf46a] <==
	* E0512 01:19:27.963125       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0512 01:19:27.963122       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"f11d33f0-30f1-47e8-b3d7-3cd32f9b7c85", APIVersion:"apps/v1", ResourceVersion:"434", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0512 01:19:27.963562       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"59b4dcea-7d7c-4c1c-bb2d-03dd882b242c", APIVersion:"apps/v1", ResourceVersion:"437", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0512 01:19:28.161750       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0512 01:19:28.162035       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"f11d33f0-30f1-47e8-b3d7-3cd32f9b7c85", APIVersion:"apps/v1", ResourceVersion:"434", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0512 01:19:28.162074       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"59b4dcea-7d7c-4c1c-bb2d-03dd882b242c", APIVersion:"apps/v1", ResourceVersion:"437", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0512 01:19:28.162087       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5" failed with pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0512 01:19:28.273348       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"59b4dcea-7d7c-4c1c-bb2d-03dd882b242c", APIVersion:"apps/v1", ResourceVersion:"437", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0512 01:19:28.273385       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5" failed with pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0512 01:19:28.279867       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0512 01:19:28.279906       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"f11d33f0-30f1-47e8-b3d7-3cd32f9b7c85", APIVersion:"apps/v1", ResourceVersion:"434", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0512 01:19:28.364396       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5" failed with pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0512 01:19:28.364636       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"59b4dcea-7d7c-4c1c-bb2d-03dd882b242c", APIVersion:"apps/v1", ResourceVersion:"437", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0512 01:19:28.366962       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"f11d33f0-30f1-47e8-b3d7-3cd32f9b7c85", APIVersion:"apps/v1", ResourceVersion:"434", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0512 01:19:28.367129       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0512 01:19:29.562167       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"f11d33f0-30f1-47e8-b3d7-3cd32f9b7c85", APIVersion:"apps/v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-6b84985989-bn4zg
	I0512 01:19:29.563929       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"59b4dcea-7d7c-4c1c-bb2d-03dd882b242c", APIVersion:"apps/v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-6fb5469cf5-mrs7d
	E0512 01:19:41.065948       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 01:19:42.867438       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 01:20:11.367852       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 01:20:14.875860       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 01:20:41.624150       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 01:20:46.881699       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 01:21:11.881591       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 01:21:18.895095       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [e2f6eb90e534] <==
	* W0512 01:19:23.689317       1 proxier.go:584] Failed to read file /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin with error open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0512 01:19:23.691124       1 proxier.go:597] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0512 01:19:23.692841       1 proxier.go:597] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0512 01:19:23.694545       1 proxier.go:597] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0512 01:19:23.696083       1 proxier.go:597] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0512 01:19:23.697485       1 proxier.go:597] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0512 01:19:23.704995       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0512 01:19:23.766882       1 node.go:135] Successfully retrieved node IP: 192.168.76.2
	I0512 01:19:23.767034       1 server_others.go:149] Using iptables Proxier.
	I0512 01:19:23.768628       1 server.go:529] Version: v1.16.0
	I0512 01:19:23.770453       1 config.go:313] Starting service config controller
	I0512 01:19:23.770890       1 config.go:131] Starting endpoints config controller
	I0512 01:19:23.772729       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0512 01:19:23.773119       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0512 01:19:23.873839       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0512 01:19:23.874148       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [eb642e98a5e5] <==
	* I0512 01:18:50.868135       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0512 01:18:50.869133       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0512 01:18:51.170922       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0512 01:18:51.171110       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0512 01:18:51.171114       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0512 01:18:51.264435       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0512 01:18:51.264576       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0512 01:18:51.264541       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0512 01:18:51.264733       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0512 01:18:51.267317       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0512 01:18:51.267461       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0512 01:18:51.267341       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0512 01:18:51.267439       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0512 01:18:52.172926       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0512 01:18:52.263523       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0512 01:18:52.265790       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0512 01:18:52.266851       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0512 01:18:52.268806       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0512 01:18:52.270000       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0512 01:18:52.272297       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0512 01:18:52.273212       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0512 01:18:52.275642       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0512 01:18:52.276881       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0512 01:18:52.279021       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0512 01:19:10.669095       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-05-12 01:12:54 UTC, end at Thu 2022-05-12 01:21:21 UTC. --
	May 12 01:20:17 old-k8s-version-20220512010246-7184 kubelet[5464]: W0512 01:20:17.083695    5464 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-bn4zg through plugin: invalid network status for
	May 12 01:20:17 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:20:17.099540    5464 pod_workers.go:191] Error syncing pod b83b5a1e-8008-45e7-b80d-6a9c27bf5f98 ("dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"
	May 12 01:20:18 old-k8s-version-20220512010246-7184 kubelet[5464]: W0512 01:20:18.116810    5464 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-bn4zg through plugin: invalid network status for
	May 12 01:20:18 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:20:18.125626    5464 pod_workers.go:191] Error syncing pod b83b5a1e-8008-45e7-b80d-6a9c27bf5f98 ("dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"
	May 12 01:20:26 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:20:26.738304    5464 pod_workers.go:191] Error syncing pod 7c6b6847-36d4-4700-b45c-4e00a73b9477 ("metrics-server-6f89b5864b-xnzbk_kube-system(7c6b6847-36d4-4700-b45c-4e00a73b9477)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	May 12 01:20:32 old-k8s-version-20220512010246-7184 kubelet[5464]: W0512 01:20:32.305044    5464 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-bn4zg through plugin: invalid network status for
	May 12 01:20:33 old-k8s-version-20220512010246-7184 kubelet[5464]: W0512 01:20:33.030991    5464 container.go:409] Failed to create summary reader for "/kubepods/besteffort/podb83b5a1e-8008-45e7-b80d-6a9c27bf5f98/0754e7bc90666385decb9bee83def6c80f7df268f31433e5f339e9e447963dc6": none of the resources are being tracked.
	May 12 01:20:33 old-k8s-version-20220512010246-7184 kubelet[5464]: W0512 01:20:33.621429    5464 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-bn4zg through plugin: invalid network status for
	May 12 01:20:33 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:20:33.637407    5464 pod_workers.go:191] Error syncing pod b83b5a1e-8008-45e7-b80d-6a9c27bf5f98 ("dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"
	May 12 01:20:34 old-k8s-version-20220512010246-7184 kubelet[5464]: W0512 01:20:34.650996    5464 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-bn4zg through plugin: invalid network status for
	May 12 01:20:36 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:20:36.434520    5464 pod_workers.go:191] Error syncing pod b83b5a1e-8008-45e7-b80d-6a9c27bf5f98 ("dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"
	May 12 01:20:41 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:20:41.813657    5464 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	May 12 01:20:41 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:20:41.813887    5464 kuberuntime_image.go:50] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	May 12 01:20:41 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:20:41.814171    5464 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	May 12 01:20:41 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:20:41.814221    5464 pod_workers.go:191] Error syncing pod 7c6b6847-36d4-4700-b45c-4e00a73b9477 ("metrics-server-6f89b5864b-xnzbk_kube-system(7c6b6847-36d4-4700-b45c-4e00a73b9477)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:20:50 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:20:50.735711    5464 pod_workers.go:191] Error syncing pod b83b5a1e-8008-45e7-b80d-6a9c27bf5f98 ("dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"
	May 12 01:20:56 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:20:56.740908    5464 pod_workers.go:191] Error syncing pod 7c6b6847-36d4-4700-b45c-4e00a73b9477 ("metrics-server-6f89b5864b-xnzbk_kube-system(7c6b6847-36d4-4700-b45c-4e00a73b9477)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	May 12 01:21:02 old-k8s-version-20220512010246-7184 kubelet[5464]: W0512 01:21:02.076747    5464 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-bn4zg through plugin: invalid network status for
	May 12 01:21:02 old-k8s-version-20220512010246-7184 kubelet[5464]: W0512 01:21:02.587432    5464 container.go:409] Failed to create summary reader for "/kubepods/besteffort/podb83b5a1e-8008-45e7-b80d-6a9c27bf5f98/73624efb9aa4e1285e0ad418a15d51078278772c4d29458bccf6eb606275ff27": none of the resources are being tracked.
	May 12 01:21:03 old-k8s-version-20220512010246-7184 kubelet[5464]: W0512 01:21:03.498096    5464 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-bn4zg through plugin: invalid network status for
	May 12 01:21:03 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:21:03.512093    5464 pod_workers.go:191] Error syncing pod b83b5a1e-8008-45e7-b80d-6a9c27bf5f98 ("dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"
	May 12 01:21:04 old-k8s-version-20220512010246-7184 kubelet[5464]: W0512 01:21:04.534363    5464 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-bn4zg through plugin: invalid network status for
	May 12 01:21:06 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:21:06.433186    5464 pod_workers.go:191] Error syncing pod b83b5a1e-8008-45e7-b80d-6a9c27bf5f98 ("dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"
	May 12 01:21:10 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:21:10.740213    5464 pod_workers.go:191] Error syncing pod 7c6b6847-36d4-4700-b45c-4e00a73b9477 ("metrics-server-6f89b5864b-xnzbk_kube-system(7c6b6847-36d4-4700-b45c-4e00a73b9477)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	May 12 01:21:21 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:21:21.736597    5464 pod_workers.go:191] Error syncing pod b83b5a1e-8008-45e7-b80d-6a9c27bf5f98 ("dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"
	
	* 
	* ==> kubernetes-dashboard [57dce2ef6b23] <==
	* 2022/05/12 01:19:56 Starting overwatch
	2022/05/12 01:19:56 Using namespace: kubernetes-dashboard
	2022/05/12 01:19:56 Using in-cluster config to connect to apiserver
	2022/05/12 01:19:56 Using secret token for csrf signing
	2022/05/12 01:19:56 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/05/12 01:19:56 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/05/12 01:19:56 Successful initial request to the apiserver, version: v1.16.0
	2022/05/12 01:19:56 Generating JWE encryption key
	2022/05/12 01:19:56 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/05/12 01:19:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/05/12 01:19:58 Initializing JWE encryption key from synchronized object
	2022/05/12 01:19:58 Creating in-cluster Sidecar client
	2022/05/12 01:19:58 Serving insecurely on HTTP port: 9090
	2022/05/12 01:19:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/05/12 01:20:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/05/12 01:20:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [4a09b1dec868] <==
	* I0512 01:19:30.284416       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0512 01:19:30.376943       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0512 01:19:30.377085       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0512 01:19:30.473700       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0512 01:19:30.473797       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0ac414cb-8b00-43fe-ac13-d4acc19bfd4f", APIVersion:"v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20220512010246-7184_1f959df6-09f4-46af-8951-76ce3599dc39 became leader
	I0512 01:19:30.474071       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20220512010246-7184_1f959df6-09f4-46af-8951-76ce3599dc39!
	I0512 01:19:30.574586       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20220512010246-7184_1f959df6-09f4-46af-8951-76ce3599dc39!
	

                                                
                                                
-- /stdout --
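Two failure signatures dominate the log dump above. The kube-scheduler "Failed to list ... is forbidden" errors are typically emitted while a fresh v1.16 control plane is still bootstrapping its RBAC bindings and usually stop once system:kube-scheduler is bound (the scheduler proceeds normally afterwards, as the later activeQ message shows). The metrics-server ImagePullBackOff is expected here: the test deliberately points the addon at the unreachable registry fake.domain (see the addons enable metrics-server --registries=MetricsServer=fake.domain entries in the Audit table below). A minimal sketch for checking both conditions against this profile, assuming the context name shown in the surrounding helpers_test.go output and POSIX-shell quoting:

	# Verify the scheduler's RBAC grant is in place; the forbidden errors should be transient
	kubectl --context old-k8s-version-20220512010246-7184 auth can-i list pods --as=system:kube-scheduler
	# Confirm the pull failure is the injected fake registry, not a real regression
	kubectl --context old-k8s-version-20220512010246-7184 -n kube-system describe deployment metrics-server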
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20220512010246-7184 -n old-k8s-version-20220512010246-7184
E0512 01:21:25.002438    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20220512010246-7184 -n old-k8s-version-20220512010246-7184: (9.0452445s)
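The interleaved cert_rotation error above points at the client certificate of the earlier functional-20220511231058-7184 profile; most likely that profile was already deleted and a kubeconfig entry outlived its files, so the message is noise rather than part of this test. A hedged cleanup sketch, using only the context/user name taken from the error itself:

	# Drop the stale kubeconfig entries left behind by the deleted profile
	kubectl config delete-context functional-20220511231058-7184
	kubectl config unset users.functional-20220511231058-7184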
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-20220512010246-7184 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-6f89b5864b-xnzbk
helpers_test.go:272: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context old-k8s-version-20220512010246-7184 describe pod metrics-server-6f89b5864b-xnzbk
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220512010246-7184 describe pod metrics-server-6f89b5864b-xnzbk: exit status 1 (2.8933038s)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6f89b5864b-xnzbk" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context old-k8s-version-20220512010246-7184 describe pod metrics-server-6f89b5864b-xnzbk: exit status 1
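The NotFound above is most likely a race: the metrics-server pod named at helpers_test.go:270 was deleted or replaced by its ReplicaSet between the listing and the describe. A sketch that tolerates that churn, assuming the same context, re-resolves the pods at describe time instead of reusing the stale name:

	# Re-list non-running pods at describe time rather than reusing an earlier name
	kubectl --context old-k8s-version-20220512010246-7184 get pods -A --field-selector=status.phase!=Running
	# Or fetch by the recorded name without failing when it has already gone away
	kubectl --context old-k8s-version-20220512010246-7184 -n kube-system get pod metrics-server-6f89b5864b-xnzbk --ignore-not-found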
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220512010246-7184
helpers_test.go:231: (dbg) Done: docker inspect old-k8s-version-20220512010246-7184: (1.3258989s)
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220512010246-7184:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "09c13f96e40541700b9cc790a0ab055a0fa1b74d8691ec171aaa8db383fbe462",
	        "Created": "2022-05-12T01:09:40.372697Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 224889,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-05-12T01:12:54.0696936Z",
	            "FinishedAt": "2022-05-12T01:12:34.3022706Z"
	        },
	        "Image": "sha256:8a42e1145657f551cd435eddb43b96ab44d0facbe44106da934225366eeb7757",
	        "ResolvConfPath": "/var/lib/docker/containers/09c13f96e40541700b9cc790a0ab055a0fa1b74d8691ec171aaa8db383fbe462/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/09c13f96e40541700b9cc790a0ab055a0fa1b74d8691ec171aaa8db383fbe462/hostname",
	        "HostsPath": "/var/lib/docker/containers/09c13f96e40541700b9cc790a0ab055a0fa1b74d8691ec171aaa8db383fbe462/hosts",
	        "LogPath": "/var/lib/docker/containers/09c13f96e40541700b9cc790a0ab055a0fa1b74d8691ec171aaa8db383fbe462/09c13f96e40541700b9cc790a0ab055a0fa1b74d8691ec171aaa8db383fbe462-json.log",
	        "Name": "/old-k8s-version-20220512010246-7184",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220512010246-7184:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220512010246-7184",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/efd1441b9492f219138becda346b76206129a4b01aeb33530662ce9014d7857c-init/diff:/var/lib/docker/overlay2/f5e9ce82d74c36e0b6de7ac5d28dc4eacb2abae094b3550db66974795ad74446/diff:/var/lib/docker/overlay2/63fa2edc88f635760dd847938696c8fc50aad8a0c51ab6c6f93b0aa9a6fcefe6/diff:/var/lib/docker/overlay2/5fcaace21e215fd120a775470b4a78ef06d9f466e024f0b1baef234ddf87f04f/diff:/var/lib/docker/overlay2/6c9accb62919ca088333f8b3a193f43886e2c3042c5ec726e1c7fd19ee183024/diff:/var/lib/docker/overlay2/a9a1aea640018bd9e524c36422ea97cc97a391601b718810fed779260e48c87a/diff:/var/lib/docker/overlay2/2476b6e8d656e43d76d4288098236bc765cb37fa9dde696f09bfce485e99229e/diff:/var/lib/docker/overlay2/6cdf08ddc61561e961de67f3e14478397868c53a4d8552909a5b84e35b28db1f/diff:/var/lib/docker/overlay2/6f6f8b6686cc7838a52ce30fdc4e938cde2fb68b161e09a9bc81fa11011619a6/diff:/var/lib/docker/overlay2/9f55b91f762ea60dc3da71cf56e5ca24181e712389d99cb8d972bba474f5d6a4/diff:/var/lib/docker/overlay2/4c269b
df57eae0d21d2034dc086308278f9b00f2301c726b6de876b9ff97298d/diff:/var/lib/docker/overlay2/5fb8ed9b9e765df8150f27b7847ec7333b2c7d978dbc1161da97d630ec7e43e2/diff:/var/lib/docker/overlay2/3a297e9f6ab51d930ef61c49a0fea772cdc2a2704a077db6adb142eb044d9a93/diff:/var/lib/docker/overlay2/2068464f4655627fb513b31660ab34c938e559da10d44fd723ce9e1d744a037d/diff:/var/lib/docker/overlay2/f783596106daebadefbb7774015c369d757665d434c96581b426b2e5f5b453c4/diff:/var/lib/docker/overlay2/ac8927d3cc7829cc82e4c0214dd4adee97aedbe2b7d992cbbc08288443c8e320/diff:/var/lib/docker/overlay2/91eff4612dd56b2c82f4357b879f9cdcbc13704bf8f6dcbfc56febb104774843/diff:/var/lib/docker/overlay2/ee7366acc162efb7b878c4c56df021a8ef756fa595230ffe898cd0dd0355eb44/diff:/var/lib/docker/overlay2/ab5df115d2ec8cd71172942a2e449de32b93a3b6b5d90122c0e734c5e11d6bb2/diff:/var/lib/docker/overlay2/59fcbe9b552129cdc5c96e9e8ff27f4b88a12645aec3cf8f48f28d91521760e0/diff:/var/lib/docker/overlay2/0684681eb5880654b43e8803ef8f17b85e6129c85ff81c13b509563184a77625/diff:/var/lib/d
ocker/overlay2/8bbd24801b480df6ca8545e8f8bb09c17b7598c2868fb94ea5b8775ce2f311b4/diff:/var/lib/docker/overlay2/f28553cc59fccccfdfc5c24b7b8dfe4055c625d0a004731911c34b4ba32a9dfb/diff:/var/lib/docker/overlay2/2e47a8ef6e4481885d71f57a1d9ef99898b741644addd2796de5c2f4c696edb0/diff:/var/lib/docker/overlay2/e1f1eaaa809c974dabd197f590d19be05325f506a53a9a1f8ba29defd7096f60/diff:/var/lib/docker/overlay2/83ce12af60df76f98283ed8f3450cd5727b42d06055b18f04a18068b105ae128/diff:/var/lib/docker/overlay2/5fd34820f54e7f8f0c898c21b5d9d030e5b82c65c901897306c3db475481167f/diff:/var/lib/docker/overlay2/3ecb6f46fa47a8906ff5de1da5a63be9c664ff5bc66faf870126868d36bb77c6/diff:/var/lib/docker/overlay2/ccb92f12dd3e84b11b2c9b1ef6a0581ad5894648432ebe7cb5d16d48c7aacf6e/diff:/var/lib/docker/overlay2/7c6d11dc9abdd4916f3759c8ae4db8c3011cff872f2fd3cc502e7f663e496765/diff:/var/lib/docker/overlay2/b865b0351704115fa113e25f7651d1dc1e2f0348c332552e555e898094f34802/diff:/var/lib/docker/overlay2/bbcf207462c3f88368214d8e4ca222f28a828bd30661741d421665b4d10
80f07/diff:/var/lib/docker/overlay2/b554a32e9a2e4d3773e918754c27a1b32bc7ec5327d3bd1f52d7a146a07fa2c5/diff:/var/lib/docker/overlay2/d0a997bacfa9b1b54f61c62f00ad2797616ea9bb55182aad68ed805f96f5f72b/diff:/var/lib/docker/overlay2/e0c168ecfe6a93618f4f653c1aba422023114f242ab1045591d0c8454573d5c2/diff:/var/lib/docker/overlay2/fb67af38a46ef55935fcfb4f1be5f34b45b3d0e1c571538828117f23eedea417/diff:/var/lib/docker/overlay2/e96ed0776e5f27ef225469ac5f5e8ed2e299c72d5db88782599c0fdd1cec2fe3/diff:/var/lib/docker/overlay2/91b77e60e0a7864ace4f5a4d65f465bd7fe862616a87a74ee9fee21dc5dceb07/diff:/var/lib/docker/overlay2/9829211293f70b356dfa8d07b5dbbc3a6d05415cbd2840fd9dd948b8b315bf18/diff:/var/lib/docker/overlay2/dc35dda36e34a2f4f3a5d958b1a7d4d75db8655c4bc7b4b3d9591f43f9a645fc/diff:/var/lib/docker/overlay2/968c2bb04f641a9c8bd30d38659dc28973b31bfd577bb1aa355ae6c2ab4a0d34/diff:/var/lib/docker/overlay2/37432c6ae0b10a52e95b215fdd2e256362060f32c4a52f0d2021b6e10b3ed77b/diff:/var/lib/docker/overlay2/77687f9734b19f3e8a5bb08b07067e0b572775
20867b7a6ad80b67ffebe332d7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/efd1441b9492f219138becda346b76206129a4b01aeb33530662ce9014d7857c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/efd1441b9492f219138becda346b76206129a4b01aeb33530662ce9014d7857c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/efd1441b9492f219138becda346b76206129a4b01aeb33530662ce9014d7857c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220512010246-7184",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220512010246-7184/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220512010246-7184",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220512010246-7184",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220512010246-7184",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "32b0c2ce0f6a9578237c9c6cb025d61417c7468c64453600f63b0a2d42c5033f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50585"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50586"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50587"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50588"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50584"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/32b0c2ce0f6a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220512010246-7184": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "09c13f96e405",
	                        "old-k8s-version-20220512010246-7184"
	                    ],
	                    "NetworkID": "62f4121100c00a6bbb9271af782221f9e410a7052f74222a0961dfec8ebf9fad",
	                    "EndpointID": "fa2e5caac9c73ffde8efa5bb8d61ea966a49ba68415a7ef79aada773644a0215",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
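The full docker inspect dump above is useful for the archive, but when triaging interactively the same data can be narrowed with a Go template via -f/--format. A sketch against this container name, with POSIX-shell quoting:

	# Container state only
	docker inspect -f '{{.State.Status}}' old-k8s-version-20220512010246-7184
	# Host port mapped to the apiserver port 8443 (should match the "50584" seen in the dump)
	docker inspect -f '{{(index .NetworkSettings.Ports "8443/tcp" 0).HostPort}}' old-k8s-version-20220512010246-7184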
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220512010246-7184 -n old-k8s-version-20220512010246-7184
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220512010246-7184 -n old-k8s-version-20220512010246-7184: (7.4509292s)
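The --format flag on minikube status takes the same Go-template syntax over the status struct. Besides the {{.Host}} and {{.APIServer}} fields the harness queries above, {{.Kubelet}} and {{.Kubeconfig}} are reported by the default status output (field names assumed from that output):

	out/minikube-windows-amd64.exe status -p old-k8s-version-20220512010246-7184 --format={{.Kubelet}}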
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-20220512010246-7184 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p old-k8s-version-20220512010246-7184 logs -n 25: (9.782033s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                            |                    Profile                     |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------------------------|-------------------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220512011148-7184 | minikube4\jenkins | v1.25.2 | 12 May 22 01:14 GMT | 12 May 22 01:14 GMT |
	|         | default-k8s-different-port-20220512011148-7184             |                                                |                   |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |                   |         |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220512011148-7184 | minikube4\jenkins | v1.25.2 | 12 May 22 01:14 GMT | 12 May 22 01:14 GMT |
	|         | default-k8s-different-port-20220512011148-7184             |                                                |                   |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |                   |         |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220512011148-7184 | minikube4\jenkins | v1.25.2 | 12 May 22 01:14 GMT | 12 May 22 01:14 GMT |
	|         | default-k8s-different-port-20220512011148-7184             |                                                |                   |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |                   |         |                     |                     |
	| ssh     | -p                                                         | no-preload-20220512010315-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:14 GMT | 12 May 22 01:14 GMT |
	|         | no-preload-20220512010315-7184                             |                                                |                   |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |                   |         |                     |                     |
	| pause   | -p                                                         | no-preload-20220512010315-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:14 GMT | 12 May 22 01:14 GMT |
	|         | no-preload-20220512010315-7184                             |                                                |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |                   |         |                     |                     |
	| unpause | -p                                                         | no-preload-20220512010315-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:15 GMT | 12 May 22 01:15 GMT |
	|         | no-preload-20220512010315-7184                             |                                                |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |                   |         |                     |                     |
	| start   | -p                                                         | embed-certs-20220512010611-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:08 GMT | 12 May 22 01:15 GMT |
	|         | embed-certs-20220512010611-7184                            |                                                |                   |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                |                   |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                |                   |         |                     |                     |
	|         | --driver=docker                                            |                                                |                   |         |                     |                     |
	|         | --kubernetes-version=v1.23.5                               |                                                |                   |         |                     |                     |
	| delete  | -p                                                         | no-preload-20220512010315-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:15 GMT | 12 May 22 01:16 GMT |
	|         | no-preload-20220512010315-7184                             |                                                |                   |         |                     |                     |
	| delete  | -p                                                         | no-preload-20220512010315-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:16 GMT | 12 May 22 01:16 GMT |
	|         | no-preload-20220512010315-7184                             |                                                |                   |         |                     |                     |
	| ssh     | -p                                                         | embed-certs-20220512010611-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:16 GMT | 12 May 22 01:16 GMT |
	|         | embed-certs-20220512010611-7184                            |                                                |                   |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |                   |         |                     |                     |
	| logs    | embed-certs-20220512010611-7184                            | embed-certs-20220512010611-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:17 GMT | 12 May 22 01:17 GMT |
	|         | logs -n 25                                                 |                                                |                   |         |                     |                     |
	| start   | -p newest-cni-20220512011616-7184 --memory=2200            | newest-cni-20220512011616-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:16 GMT | 12 May 22 01:18 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |                   |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |                   |         |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.23.6-rc.0          |                                                |                   |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220512011616-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:18 GMT | 12 May 22 01:18 GMT |
	|         | newest-cni-20220512011616-7184                             |                                                |                   |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                |                   |         |                     |                     |
	| logs    | embed-certs-20220512010611-7184                            | embed-certs-20220512010611-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:17 GMT | 12 May 22 01:18 GMT |
	|         | logs -n 25                                                 |                                                |                   |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220512011616-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:18 GMT | 12 May 22 01:18 GMT |
	|         | newest-cni-20220512011616-7184                             |                                                |                   |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                |                   |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220512011616-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:18 GMT | 12 May 22 01:19 GMT |
	|         | newest-cni-20220512011616-7184                             |                                                |                   |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |                   |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220512010611-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:18 GMT | 12 May 22 01:19 GMT |
	|         | embed-certs-20220512010611-7184                            |                                                |                   |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220512010611-7184                | minikube4\jenkins | v1.25.2 | 12 May 22 01:19 GMT | 12 May 22 01:19 GMT |
	|         | embed-certs-20220512010611-7184                            |                                                |                   |         |                     |                     |
	| start   | -p newest-cni-20220512011616-7184 --memory=2200            | newest-cni-20220512011616-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:19 GMT | 12 May 22 01:20 GMT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |                   |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |                   |         |                     |                     |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.23.6-rc.0          |                                                |                   |         |                     |                     |
	| start   | -p                                                         | old-k8s-version-20220512010246-7184            | minikube4\jenkins | v1.25.2 | 12 May 22 01:12 GMT | 12 May 22 01:20 GMT |
	|         | old-k8s-version-20220512010246-7184                        |                                                |                   |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                |                   |         |                     |                     |
	|         | --wait=true --kvm-network=default                          |                                                |                   |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                |                   |         |                     |                     |
	|         | --disable-driver-mounts                                    |                                                |                   |         |                     |                     |
	|         | --keep-context=false                                       |                                                |                   |         |                     |                     |
	|         | --driver=docker                                            |                                                |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                               |                                                |                   |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220512011616-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:20 GMT | 12 May 22 01:20 GMT |
	|         | newest-cni-20220512011616-7184                             |                                                |                   |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |                   |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220512011616-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:20 GMT | 12 May 22 01:20 GMT |
	|         | newest-cni-20220512011616-7184                             |                                                |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |                   |         |                     |                     |
	| ssh     | -p                                                         | old-k8s-version-20220512010246-7184            | minikube4\jenkins | v1.25.2 | 12 May 22 01:20 GMT | 12 May 22 01:20 GMT |
	|         | old-k8s-version-20220512010246-7184                        |                                                |                   |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                |                   |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220512011616-7184                 | minikube4\jenkins | v1.25.2 | 12 May 22 01:20 GMT | 12 May 22 01:21 GMT |
	|         | newest-cni-20220512011616-7184                             |                                                |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                |                   |         |                     |                     |
	| logs    | old-k8s-version-20220512010246-7184                        | old-k8s-version-20220512010246-7184            | minikube4\jenkins | v1.25.2 | 12 May 22 01:21 GMT | 12 May 22 01:21 GMT |
	|         | logs -n 25                                                 |                                                |                   |         |                     |                     |
	|---------|------------------------------------------------------------|------------------------------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/12 01:19:49
	Running on machine: minikube4
	Binary: Built with gc go1.18.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0512 01:19:49.571176    4188 out.go:296] Setting OutFile to fd 1860 ...
	I0512 01:19:49.639221    4188 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:19:49.639221    4188 out.go:309] Setting ErrFile to fd 1796...
	I0512 01:19:49.639221    4188 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:19:49.650223    4188 out.go:303] Setting JSON to false
	I0512 01:19:49.653228    4188 start.go:115] hostinfo: {"hostname":"minikube4","uptime":16842,"bootTime":1652301547,"procs":166,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0512 01:19:49.653228    4188 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0512 01:19:49.659225    4188 out.go:177] * [auto-20220512010229-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0512 01:19:49.662226    4188 notify.go:193] Checking for updates...
	I0512 01:19:49.668232    4188 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0512 01:19:49.671297    4188 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0512 01:19:49.676237    4188 out.go:177]   - MINIKUBE_LOCATION=13639
	I0512 01:19:49.678249    4188 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0512 01:19:46.933351    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:19:49.432852    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:19:49.672237    4756 system_pods.go:86] 4 kube-system pods found
	I0512 01:19:49.672237    4756 system_pods.go:89] "coredns-5644d7b6d9-ds6wg" [274c71a2-5a74-40cf-9719-e53e1901acdb] Running
	I0512 01:19:49.672237    4756 system_pods.go:89] "kube-proxy-5dp6x" [29ed9a2f-069f-409e-8a9f-ce8869e1a908] Running
	I0512 01:19:49.672237    4756 system_pods.go:89] "metrics-server-6f89b5864b-xnzbk" [7c6b6847-36d4-4700-b45c-4e00a73b9477] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0512 01:19:49.672237    4756 system_pods.go:89] "storage-provisioner" [aab59255-6979-4cee-bb62-a1d8611e5cf8] Running
	I0512 01:19:49.672237    4756 retry.go:31] will retry after 4.086092664s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0512 01:19:47.965926    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:47.982987    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:48.013100    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:48.171006    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:48.186022    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:48.211529    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:48.374994    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:48.390804    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:48.421480    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:48.563441    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:48.578770    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:48.616536    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:48.769628    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:48.786513    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:48.815443    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:48.973003    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:48.985016    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:49.012504    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:49.176835    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:49.204038    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:49.236355    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:49.365687    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:49.382965    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:49.423915    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:49.567179    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:49.584863    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:49.616760    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:49.769459    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:49.782464    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:49.809387    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:49.974446    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:49.993561    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:50.022494    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:50.163756    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:50.184636    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:50.211748    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:50.211812    6912 api_server.go:165] Checking apiserver status ...
	I0512 01:19:50.222237    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0512 01:19:50.246716    6912 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:50.246716    6912 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
	I0512 01:19:50.246716    6912 kubeadm.go:1067] stopping kube-system containers ...
	I0512 01:19:50.255348    6912 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0512 01:19:50.340290    6912 docker.go:442] Stopping containers: [d227455ccdde 2705871ca0f7 49817350aebd 852768ca0726 0cbc97ed8c11 badfe163ceb4 9367d74fd2f8 4d1db2f18b33 4a44055f81f8 fbd2796c00bf 48054c5b8de4 6f1ab527264d 4f0acad8f528 3e9d5d1a9343 6b5810bcd73a 114027ffb054 7e8a5a194b38 d8804284e08f dda6e2dbf316]
	I0512 01:19:50.352131    6912 ssh_runner.go:195] Run: docker stop d227455ccdde 2705871ca0f7 49817350aebd 852768ca0726 0cbc97ed8c11 badfe163ceb4 9367d74fd2f8 4d1db2f18b33 4a44055f81f8 fbd2796c00bf 48054c5b8de4 6f1ab527264d 4f0acad8f528 3e9d5d1a9343 6b5810bcd73a 114027ffb054 7e8a5a194b38 d8804284e08f dda6e2dbf316
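
The two commands above collect every container whose name matches the kubelet's `k8s_<container>_<pod>_<namespace>_` naming scheme and stop them in a single `docker stop`. A rough local sketch using plain os/exec (minikube routes the same commands through its ssh_runner into the node container):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List IDs of kube-system containers by the kubelet naming convention.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Println("listing containers:", err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return // nothing to stop
	}
	// Stop all matched containers with one `docker stop id1 id2 ...`.
	args := append([]string{"stop"}, ids...)
	if err := exec.Command("docker", args...).Run(); err != nil {
		fmt.Println("stopping containers:", err)
	}
}
```
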
	I0512 01:19:50.444468    6912 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0512 01:19:50.490417    6912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0512 01:19:50.512429    6912 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 12 01:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 May 12 01:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 May 12 01:18 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 May 12 01:17 /etc/kubernetes/scheduler.conf
	
	I0512 01:19:50.522401    6912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0512 01:19:50.551414    6912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0512 01:19:50.593427    6912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0512 01:19:50.613499    6912 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:50.622421    6912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0512 01:19:50.649418    6912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0512 01:19:50.670974    6912 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0512 01:19:50.685280    6912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
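
The grep-then-remove sequence above treats a grep exit status of 1 (pattern absent) as "this kubeconfig points at the wrong endpoint" and deletes the file so the later `kubeadm init phase kubeconfig` regenerates it. A compact sketch of that check, with paths and endpoint taken from the log:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		// grep exits 0 when the endpoint is present, non-zero when it is not.
		if err := exec.Command("grep", "-q", endpoint, conf).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
			if rmErr := os.Remove(conf); rmErr != nil {
				fmt.Println("remove:", rmErr)
			}
		}
	}
}
```
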
	I0512 01:19:50.727494    6912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0512 01:19:50.752047    6912 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0512 01:19:50.752047    6912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 01:19:50.884508    6912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 01:19:52.347373    6912 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.4627672s)
	I0512 01:19:52.347447    6912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0512 01:19:52.668499    6912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 01:19:52.897641    6912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
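
The five kubeadm invocations above run a fixed phase order: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of driving that sequence locally; it assumes `kubeadm` is on PATH (minikube instead prefixes PATH with its own binaries directory and runs each command over SSH):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		// Each phase must succeed before the next one runs.
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			fmt.Printf("kubeadm %v failed: %v\n%s", p, err, out)
			return
		}
	}
}
```
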
	I0512 01:19:49.682253    4188 config.go:178] Loaded profile config "default-k8s-different-port-20220512011148-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:19:49.682253    4188 config.go:178] Loaded profile config "newest-cni-20220512011616-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6-rc.0
	I0512 01:19:49.683411    4188 config.go:178] Loaded profile config "old-k8s-version-20220512010246-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0512 01:19:49.683411    4188 driver.go:358] Setting default libvirt URI to qemu:///system
	I0512 01:19:52.538476    4188 docker.go:137] docker version: linux-20.10.14
	I0512 01:19:52.546247    4188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:19:51.988243    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:19:54.426932    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:19:54.890230    4188 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.3438627s)
	I0512 01:19:55.419165    4188 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:86 OomKillDisable:true NGoroutines:63 SystemTime:2022-05-12 01:19:53.7251318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:19:55.739799    4188 out.go:177] * Using the docker driver based on user configuration
	I0512 01:19:53.773240    4756 system_pods.go:86] 5 kube-system pods found
	I0512 01:19:53.773240    4756 system_pods.go:89] "coredns-5644d7b6d9-ds6wg" [274c71a2-5a74-40cf-9719-e53e1901acdb] Running
	I0512 01:19:53.773240    4756 system_pods.go:89] "kube-apiserver-old-k8s-version-20220512010246-7184" [5e4b74f1-7f9f-4b1e-bfbb-762b651204a1] Pending
	I0512 01:19:53.773240    4756 system_pods.go:89] "kube-proxy-5dp6x" [29ed9a2f-069f-409e-8a9f-ce8869e1a908] Running
	I0512 01:19:53.773240    4756 system_pods.go:89] "metrics-server-6f89b5864b-xnzbk" [7c6b6847-36d4-4700-b45c-4e00a73b9477] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0512 01:19:53.773240    4756 system_pods.go:89] "storage-provisioner" [aab59255-6979-4cee-bb62-a1d8611e5cf8] Running
	I0512 01:19:53.773240    4756 retry.go:31] will retry after 6.402197611s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0512 01:19:53.169962    6912 api_server.go:51] waiting for apiserver process to appear ...
	I0512 01:19:53.187515    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:19:53.730427    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:19:54.229642    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:19:54.732637    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:19:55.239773    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:19:55.743657    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:19:56.227267    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:19:56.733277    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:19:57.239840    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:19:57.726233    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:19:55.744334    4188 start.go:284] selected driver: docker
	I0512 01:19:55.744334    4188 start.go:801] validating driver "docker" against <nil>
	I0512 01:19:55.744334    4188 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0512 01:19:55.828778    4188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:19:58.492957    4188 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.6640423s)
	I0512 01:19:58.492957    4188 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:86 OomKillDisable:true NGoroutines:63 SystemTime:2022-05-12 01:19:57.0171379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:19:58.492957    4188 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0512 01:19:58.493961    4188 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0512 01:19:58.496990    4188 out.go:177] * Using Docker Desktop driver with the root privilege
	I0512 01:19:58.498953    4188 cni.go:95] Creating CNI manager for ""
	I0512 01:19:58.498953    4188 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 01:19:58.498953    4188 start_flags.go:306] config:
	{Name:auto-20220512010229-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:auto-20220512010229-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 01:19:58.502956    4188 out.go:177] * Starting control plane node auto-20220512010229-7184 in cluster auto-20220512010229-7184
	I0512 01:19:58.504961    4188 cache.go:120] Beginning downloading kic base image for docker with docker
	I0512 01:19:58.507965    4188 out.go:177] * Pulling base image ...
	I0512 01:19:58.509945    4188 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:19:58.509945    4188 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0512 01:19:58.509945    4188 preload.go:148] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0512 01:19:58.509945    4188 cache.go:57] Caching tarball of preloaded images
	I0512 01:19:58.509945    4188 preload.go:174] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0512 01:19:58.510965    4188 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0512 01:19:58.510965    4188 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\config.json ...
	I0512 01:19:58.510965    4188 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\config.json: {Name:mkd138d070c3656e8dfc555bf2a37060768135d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:19:59.973645    4188 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
	I0512 01:19:59.973645    4188 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in daemon, skipping load
	I0512 01:19:59.973645    4188 cache.go:206] Successfully downloaded all kic artifacts
	I0512 01:19:59.973645    4188 start.go:352] acquiring machines lock for auto-20220512010229-7184: {Name:mkce085adb4528067fc9b8e27ba1f8fcfad3c3c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 01:19:59.973645    4188 start.go:356] acquired machines lock for "auto-20220512010229-7184" in 0s
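
The machines-lock lines above carry a `{Delay:500ms Timeout:10m0s}` spec: poll for the lock every half second, give up after ten minutes. An illustrative stand-in using an exclusive lock file; minikube itself uses a mutex library rather than this file-based approach:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		// O_EXCL makes creation fail while another holder owns the file.
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire(os.TempDir()+"/machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("acquired machines lock")
}
```
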
	I0512 01:19:59.974439    4188 start.go:91] Provisioning new machine with config: &{Name:auto-20220512010229-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:auto-20220512010229-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 01:19:59.974439    4188 start.go:131] createHost starting for "" (driver="docker")
	I0512 01:19:56.448640    4792 pod_ready.go:102] pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace has status "Ready":"False"
	I0512 01:19:58.907967    4792 pod_ready.go:81] duration metric: took 4m0.0148566s waiting for pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace to be "Ready" ...
	E0512 01:19:58.907967    4792 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-b955d9d8-rm42p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0512 01:19:58.907967    4792 pod_ready.go:38] duration metric: took 4m5.7987586s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0512 01:19:58.907967    4792 kubeadm.go:605] restartCluster took 4m38.5249642s
	W0512 01:19:58.907967    4792 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0512 01:19:58.908975    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0512 01:20:00.291379    4756 system_pods.go:86] 5 kube-system pods found
	I0512 01:20:00.291379    4756 system_pods.go:89] "coredns-5644d7b6d9-ds6wg" [274c71a2-5a74-40cf-9719-e53e1901acdb] Running
	I0512 01:20:00.291379    4756 system_pods.go:89] "kube-apiserver-old-k8s-version-20220512010246-7184" [5e4b74f1-7f9f-4b1e-bfbb-762b651204a1] Running
	I0512 01:20:00.291530    4756 system_pods.go:89] "kube-proxy-5dp6x" [29ed9a2f-069f-409e-8a9f-ce8869e1a908] Running
	I0512 01:20:00.291567    4756 system_pods.go:89] "metrics-server-6f89b5864b-xnzbk" [7c6b6847-36d4-4700-b45c-4e00a73b9477] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0512 01:20:00.291567    4756 system_pods.go:89] "storage-provisioner" [aab59255-6979-4cee-bb62-a1d8611e5cf8] Running
	I0512 01:20:00.291639    4756 retry.go:31] will retry after 6.062999549s: missing components: etcd, kube-controller-manager, kube-scheduler
	I0512 01:19:58.232279    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:19:58.731964    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:19:58.890978    6912 api_server.go:71] duration metric: took 5.7207215s to wait for apiserver process to appear ...
	I0512 01:19:58.890978    6912 api_server.go:87] waiting for apiserver healthz status ...
	I0512 01:19:58.890978    6912 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50853/healthz ...
	I0512 01:19:58.896961    6912 api_server.go:256] stopped: https://127.0.0.1:50853/healthz: Get "https://127.0.0.1:50853/healthz": EOF
	I0512 01:19:59.401007    6912 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50853/healthz ...
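
The healthz wait visible here (and resolved further down when the endpoint finally returns 200 "ok") probes the forwarded apiserver port, treating connection errors (EOF), 403 (RBAC not yet bootstrapped), and 500 (post-start hooks still failing) as "not ready". A sketch of that loop; skipping certificate verification is an assumption of this anonymous probe, and port 50853 mirrors the log:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://127.0.0.1:50853/healthz"
	for deadline := time.Now().Add(2 * time.Minute); time.Now().Before(deadline); time.Sleep(500 * time.Millisecond) {
		resp, err := client.Get(url)
		if err != nil {
			continue // EOF / connection refused: apiserver not up yet
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
			fmt.Println("apiserver healthy")
			return
		}
		// 403 or 500 with a [+]/[-] check dump: keep polling.
	}
	fmt.Println("timed out waiting for healthz")
}
```
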
	I0512 01:19:59.978436    4188 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0512 01:19:59.978436    4188 start.go:165] libmachine.API.Create for "auto-20220512010229-7184" (driver="docker")
	I0512 01:19:59.978436    4188 client.go:168] LocalClient.Create starting
	I0512 01:19:59.979496    4188 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0512 01:19:59.979496    4188 main.go:134] libmachine: Decoding PEM data...
	I0512 01:19:59.979496    4188 main.go:134] libmachine: Parsing certificate...
	I0512 01:19:59.979496    4188 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0512 01:19:59.980461    4188 main.go:134] libmachine: Decoding PEM data...
	I0512 01:19:59.980461    4188 main.go:134] libmachine: Parsing certificate...
	I0512 01:19:59.994433    4188 cli_runner.go:164] Run: docker network inspect auto-20220512010229-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0512 01:20:01.234030    4188 cli_runner.go:211] docker network inspect auto-20220512010229-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 01:20:01.234030    4188 cli_runner.go:217] Completed: docker network inspect auto-20220512010229-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2395329s)
	I0512 01:20:01.241764    4188 network_create.go:272] running [docker network inspect auto-20220512010229-7184] to gather additional debugging logs...
	I0512 01:20:01.241798    4188 cli_runner.go:164] Run: docker network inspect auto-20220512010229-7184
	W0512 01:20:02.452635    4188 cli_runner.go:211] docker network inspect auto-20220512010229-7184 returned with exit code 1
	I0512 01:20:02.452635    4188 cli_runner.go:217] Completed: docker network inspect auto-20220512010229-7184: (1.2106913s)
	I0512 01:20:02.452635    4188 network_create.go:275] error running [docker network inspect auto-20220512010229-7184]: docker network inspect auto-20220512010229-7184: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20220512010229-7184
	I0512 01:20:02.452635    4188 network_create.go:277] output of [docker network inspect auto-20220512010229-7184]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20220512010229-7184
	
	** /stderr **
	I0512 01:20:02.460063    4188 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 01:20:03.649817    4188 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1896924s)
	I0512 01:20:03.673052    4188 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000794630] misses:0}
	I0512 01:20:03.673052    4188 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:20:03.673052    4188 network_create.go:115] attempt to create docker network auto-20220512010229-7184 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0512 01:20:03.687033    4188 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220512010229-7184
	I0512 01:20:06.376301    4756 system_pods.go:86] 7 kube-system pods found
	I0512 01:20:06.376301    4756 system_pods.go:89] "coredns-5644d7b6d9-ds6wg" [274c71a2-5a74-40cf-9719-e53e1901acdb] Running
	I0512 01:20:06.376301    4756 system_pods.go:89] "etcd-old-k8s-version-20220512010246-7184" [8197f31d-c95a-42f1-9974-091d1c27c60b] Pending
	I0512 01:20:06.376301    4756 system_pods.go:89] "kube-apiserver-old-k8s-version-20220512010246-7184" [5e4b74f1-7f9f-4b1e-bfbb-762b651204a1] Running
	I0512 01:20:06.376301    4756 system_pods.go:89] "kube-controller-manager-old-k8s-version-20220512010246-7184" [464fedb8-445d-4d2b-98af-2fea913fa291] Pending
	I0512 01:20:06.376301    4756 system_pods.go:89] "kube-proxy-5dp6x" [29ed9a2f-069f-409e-8a9f-ce8869e1a908] Running
	I0512 01:20:06.376301    4756 system_pods.go:89] "metrics-server-6f89b5864b-xnzbk" [7c6b6847-36d4-4700-b45c-4e00a73b9477] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0512 01:20:06.376301    4756 system_pods.go:89] "storage-provisioner" [aab59255-6979-4cee-bb62-a1d8611e5cf8] Running
	I0512 01:20:06.376301    4756 retry.go:31] will retry after 10.504197539s: missing components: etcd, kube-controller-manager, kube-scheduler
	I0512 01:20:04.406423    6912 api_server.go:256] stopped: https://127.0.0.1:50853/healthz: Get "https://127.0.0.1:50853/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0512 01:20:04.910700    6912 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50853/healthz ...
	I0512 01:20:05.276798    6912 api_server.go:266] https://127.0.0.1:50853/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0512 01:20:05.276798    6912 api_server.go:102] status: https://127.0.0.1:50853/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0512 01:20:05.400835    6912 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50853/healthz ...
	I0512 01:20:05.577325    6912 api_server.go:266] https://127.0.0.1:50853/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0512 01:20:05.577325    6912 api_server.go:102] status: https://127.0.0.1:50853/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0512 01:20:05.902960    6912 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50853/healthz ...
	I0512 01:20:05.983572    6912 api_server.go:266] https://127.0.0.1:50853/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0512 01:20:05.983572    6912 api_server.go:102] status: https://127.0.0.1:50853/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0512 01:20:06.405287    6912 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50853/healthz ...
	I0512 01:20:06.483009    6912 api_server.go:266] https://127.0.0.1:50853/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0512 01:20:06.483127    6912 api_server.go:102] status: https://127.0.0.1:50853/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0512 01:20:06.909964    6912 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50853/healthz ...
	I0512 01:20:07.552892    6912 api_server.go:266] https://127.0.0.1:50853/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0512 01:20:07.553019    6912 api_server.go:102] status: https://127.0.0.1:50853/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0512 01:20:07.904511    6912 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50853/healthz ...
	W0512 01:20:04.799654    4188 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220512010229-7184 returned with exit code 1
	I0512 01:20:04.799654    4188 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220512010229-7184: (1.1124913s)
	W0512 01:20:04.799654    4188 network_create.go:107] failed to create docker network auto-20220512010229-7184 192.168.49.0/24, will retry: subnet is taken
	I0512 01:20:04.822835    4188 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000794630] amended:false}} dirty:map[] misses:0}
	I0512 01:20:04.822875    4188 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:20:04.843798    4188 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000794630] amended:true}} dirty:map[192.168.49.0:0xc000794630 192.168.58.0:0xc000006a08] misses:0}
	I0512 01:20:04.843798    4188 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:20:04.843798    4188 network_create.go:115] attempt to create docker network auto-20220512010229-7184 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0512 01:20:04.850795    4188 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220512010229-7184
	I0512 01:20:06.166981    4188 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20220512010229-7184: (1.3151242s)
	I0512 01:20:06.166981    4188 network_create.go:99] docker network auto-20220512010229-7184 192.168.58.0/24 created
	I0512 01:20:06.166981    4188 kic.go:106] calculated static IP "192.168.58.2" for the "auto-20220512010229-7184" container
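
The episode just completed shows the subnet-probing retry: `docker network create` on 192.168.49.0/24 fails because "subnet is taken", the reserved candidate is skipped, and 192.168.58.0/24 succeeds. A sketch of that walk; the +9 step between candidates matches the 49-to-58 jump in the log but is otherwise an assumption, as is the candidate ceiling:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "auto-20220512010229-7184"
	for third := 49; third <= 94; third += 9 { // 192.168.49.0, .58.0, .67.0, ...
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500", name).CombinedOutput()
		if err == nil {
			fmt.Printf("docker network %s %s created\n", name, subnet)
			return
		}
		// "Pool overlaps" style failures just advance to the next candidate.
		fmt.Printf("failed to create %s, will retry: %s", subnet, out)
	}
	fmt.Println("no free subnet found")
}
```
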
	I0512 01:20:06.193972    4188 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0512 01:20:07.431171    4188 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.2370298s)
	I0512 01:20:07.439300    4188 cli_runner.go:164] Run: docker volume create auto-20220512010229-7184 --label name.minikube.sigs.k8s.io=auto-20220512010229-7184 --label created_by.minikube.sigs.k8s.io=true
	I0512 01:20:08.711061    4188 cli_runner.go:217] Completed: docker volume create auto-20220512010229-7184 --label name.minikube.sigs.k8s.io=auto-20220512010229-7184 --label created_by.minikube.sigs.k8s.io=true: (1.2716197s)
	I0512 01:20:08.711061    4188 oci.go:103] Successfully created a docker volume auto-20220512010229-7184
	I0512 01:20:08.718042    4188 cli_runner.go:164] Run: docker run --rm --name auto-20220512010229-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220512010229-7184 --entrypoint /usr/bin/test -v auto-20220512010229-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib
	I0512 01:20:07.981541    6912 api_server.go:266] https://127.0.0.1:50853/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0512 01:20:07.981541    6912 api_server.go:102] status: https://127.0.0.1:50853/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0512 01:20:08.412223    6912 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50853/healthz ...
	I0512 01:20:08.484191    6912 api_server.go:266] https://127.0.0.1:50853/healthz returned 200:
	ok
	I0512 01:20:08.588044    6912 api_server.go:140] control plane version: v1.23.6-rc.0
	I0512 01:20:08.588044    6912 api_server.go:130] duration metric: took 9.6965749s to wait for apiserver health ...
	I0512 01:20:08.588189    6912 cni.go:95] Creating CNI manager for ""
	I0512 01:20:08.588189    6912 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 01:20:08.588189    6912 system_pods.go:43] waiting for kube-system pods to appear ...
	I0512 01:20:08.693623    6912 system_pods.go:59] 8 kube-system pods found
	I0512 01:20:08.693623    6912 system_pods.go:61] "coredns-64897985d-5ws8d" [3ab4607e-b641-4ec4-95b8-e748182293c3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0512 01:20:08.693623    6912 system_pods.go:61] "etcd-newest-cni-20220512011616-7184" [bd9ea317-d13b-4bd1-816d-a9cebacb0f9d] Running
	I0512 01:20:08.693623    6912 system_pods.go:61] "kube-apiserver-newest-cni-20220512011616-7184" [fea8351d-15a8-453d-a564-46ca2334caf1] Running
	I0512 01:20:08.693623    6912 system_pods.go:61] "kube-controller-manager-newest-cni-20220512011616-7184" [18ff8810-e805-4c54-bca7-51c98357c897] Running
	I0512 01:20:08.693623    6912 system_pods.go:61] "kube-proxy-4rh4b" [b0893ff4-bc22-47ac-8feb-c4f6dd7d3fb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0512 01:20:08.693623    6912 system_pods.go:61] "kube-scheduler-newest-cni-20220512011616-7184" [658fbc66-063f-4b58-b41f-e054ec6b9ec4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0512 01:20:08.693623    6912 system_pods.go:61] "metrics-server-b955d9d8-nkjgl" [6ec6d39c-3946-4260-a4ae-3b080a511a18] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0512 01:20:08.693623    6912 system_pods.go:61] "storage-provisioner" [e46bc56d-a455-44fd-a6ca-36a598ad3fdd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0512 01:20:08.693623    6912 system_pods.go:74] duration metric: took 105.4288ms to wait for pod list to return data ...
	I0512 01:20:08.693623    6912 node_conditions.go:102] verifying NodePressure condition ...
	I0512 01:20:08.775510    6912 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0512 01:20:08.775510    6912 node_conditions.go:123] node cpu capacity is 16
	I0512 01:20:08.775698    6912 node_conditions.go:105] duration metric: took 82.0704ms to run NodePressure ...
	I0512 01:20:08.775698    6912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0512 01:20:11.396272    6912 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.6204433s)
	I0512 01:20:11.396272    6912 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0512 01:20:11.485562    6912 ops.go:34] apiserver oom_adj: -16
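
The oom_adj probe above shells out to `cat /proc/$(pgrep kube-apiserver)/oom_adj` and expects a strongly negative value such as -16, which tells the kernel OOM killer to spare the apiserver. The same check reimplemented directly, as a small sketch:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// -x: exact process name match; -n: newest matching PID.
	pid, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	path := fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid)))
	score, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("reading oom_adj:", err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", score)
}
```
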
	I0512 01:20:11.485562    6912 kubeadm.go:605] restartCluster took 25.5629595s
	I0512 01:20:11.485562    6912 kubeadm.go:393] StartCluster complete in 25.6695855s
	I0512 01:20:11.485562    6912 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:20:11.485562    6912 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0512 01:20:11.493548    6912 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:20:11.672134    6912 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220512011616-7184" rescaled to 1
	I0512 01:20:11.672386    6912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0512 01:20:11.672500    6912 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0512 01:20:11.672386    6912 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 01:20:11.676966    6912 out.go:177] * Verifying Kubernetes components...
	I0512 01:20:11.672594    6912 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220512011616-7184"
	I0512 01:20:11.672594    6912 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220512011616-7184"
	I0512 01:20:11.672594    6912 addons.go:65] Setting dashboard=true in profile "newest-cni-20220512011616-7184"
	I0512 01:20:11.672594    6912 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220512011616-7184"
	I0512 01:20:11.672932    6912 config.go:178] Loaded profile config "newest-cni-20220512011616-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6-rc.0
	I0512 01:20:11.677159    6912 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220512011616-7184"
	I0512 01:20:11.677159    6912 addons.go:153] Setting addon dashboard=true in "newest-cni-20220512011616-7184"
	W0512 01:20:11.677307    6912 addons.go:165] addon dashboard should already be in state true
	I0512 01:20:11.677344    6912 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220512011616-7184"
	W0512 01:20:11.677344    6912 addons.go:165] addon metrics-server should already be in state true
	I0512 01:20:11.677600    6912 host.go:66] Checking if "newest-cni-20220512011616-7184" exists ...
	I0512 01:20:11.677637    6912 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220512011616-7184"
	W0512 01:20:11.679387    6912 addons.go:165] addon storage-provisioner should already be in state true
	I0512 01:20:11.680588    6912 host.go:66] Checking if "newest-cni-20220512011616-7184" exists ...
	I0512 01:20:11.677637    6912 host.go:66] Checking if "newest-cni-20220512011616-7184" exists ...
	I0512 01:20:11.701049    6912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 01:20:11.708053    6912 cli_runner.go:164] Run: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}
	I0512 01:20:11.713433    6912 cli_runner.go:164] Run: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}
	I0512 01:20:11.716939    6912 cli_runner.go:164] Run: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}
	I0512 01:20:11.719578    6912 cli_runner.go:164] Run: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}
	I0512 01:20:12.588862    6912 start.go:795] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0512 01:20:12.599856    6912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184
	I0512 01:20:13.246533    6912 cli_runner.go:217] Completed: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}: (1.5263316s)
	I0512 01:20:13.249817    6912 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0512 01:20:13.252968    6912 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 01:20:13.252968    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0512 01:20:13.262555    6912 cli_runner.go:217] Completed: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}: (1.5451467s)
	I0512 01:20:13.265858    6912 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0512 01:20:13.267570    6912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184
	I0512 01:20:13.272555    6912 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0512 01:20:13.276529    6912 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0512 01:20:13.276529    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0512 01:20:13.295546    6912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184
	I0512 01:20:13.300559    6912 cli_runner.go:217] Completed: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}: (1.5870464s)
	I0512 01:20:13.305548    6912 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0512 01:20:12.064431    4188 cli_runner.go:217] Completed: docker run --rm --name auto-20220512010229-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220512010229-7184 --entrypoint /usr/bin/test -v auto-20220512010229-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib: (3.3462212s)
	I0512 01:20:12.064431    4188 oci.go:107] Successfully prepared a docker volume auto-20220512010229-7184
	I0512 01:20:12.064431    4188 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:20:12.064431    4188 kic.go:179] Starting extracting preloaded images to volume ...
	I0512 01:20:12.084632    4188 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20220512010229-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir
	I0512 01:20:16.900552    4756 system_pods.go:86] 8 kube-system pods found
	I0512 01:20:16.900552    4756 system_pods.go:89] "coredns-5644d7b6d9-ds6wg" [274c71a2-5a74-40cf-9719-e53e1901acdb] Running
	I0512 01:20:16.900552    4756 system_pods.go:89] "etcd-old-k8s-version-20220512010246-7184" [8197f31d-c95a-42f1-9974-091d1c27c60b] Running
	I0512 01:20:16.900552    4756 system_pods.go:89] "kube-apiserver-old-k8s-version-20220512010246-7184" [5e4b74f1-7f9f-4b1e-bfbb-762b651204a1] Running
	I0512 01:20:16.900552    4756 system_pods.go:89] "kube-controller-manager-old-k8s-version-20220512010246-7184" [464fedb8-445d-4d2b-98af-2fea913fa291] Running
	I0512 01:20:16.900552    4756 system_pods.go:89] "kube-proxy-5dp6x" [29ed9a2f-069f-409e-8a9f-ce8869e1a908] Running
	I0512 01:20:16.900552    4756 system_pods.go:89] "kube-scheduler-old-k8s-version-20220512010246-7184" [ee09078d-37ef-42bd-bdc4-c6d4d41df903] Pending
	I0512 01:20:16.900552    4756 system_pods.go:89] "metrics-server-6f89b5864b-xnzbk" [7c6b6847-36d4-4700-b45c-4e00a73b9477] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0512 01:20:16.900552    4756 system_pods.go:89] "storage-provisioner" [aab59255-6979-4cee-bb62-a1d8611e5cf8] Running
	I0512 01:20:16.900552    4756 retry.go:31] will retry after 12.194240946s: missing components: kube-scheduler
	I0512 01:20:13.308529    6912 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0512 01:20:13.308529    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0512 01:20:13.309551    6912 cli_runner.go:217] Completed: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}: (1.6014178s)
	I0512 01:20:13.322538    6912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184
	I0512 01:20:13.395534    6912 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220512011616-7184"
	W0512 01:20:13.395534    6912 addons.go:165] addon default-storageclass should already be in state true
	I0512 01:20:13.395534    6912 host.go:66] Checking if "newest-cni-20220512011616-7184" exists ...
	I0512 01:20:13.418576    6912 cli_runner.go:164] Run: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}
	I0512 01:20:14.242039    6912 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184: (1.6421002s)
	I0512 01:20:14.242039    6912 api_server.go:51] waiting for apiserver process to appear ...
	I0512 01:20:14.255895    6912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 01:20:14.370900    6912 api_server.go:71] duration metric: took 2.6981703s to wait for apiserver process to appear ...
	I0512 01:20:14.370900    6912 api_server.go:87] waiting for apiserver healthz status ...
	I0512 01:20:14.370900    6912 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:50853/healthz ...
	I0512 01:20:14.398889    6912 api_server.go:266] https://127.0.0.1:50853/healthz returned 200:
	ok
	I0512 01:20:14.403882    6912 api_server.go:140] control plane version: v1.23.6-rc.0
	I0512 01:20:14.403882    6912 api_server.go:130] duration metric: took 32.9806ms to wait for apiserver health ...
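
The healthz wait above is a plain HTTPS GET against the apiserver port published on localhost; polling continues until the endpoint returns 200 with body "ok". A minimal Go sketch of one probe (illustration only; port 50853 is the one from this log, and certificate verification is skipped because the apiserver presents a self-signed cert and this is only a liveness probe):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Accept the cluster's self-signed certificate for this probe only.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://127.0.0.1:50853/healthz") // port from the log
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}
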
	I0512 01:20:14.403882    6912 system_pods.go:43] waiting for kube-system pods to appear ...
	I0512 01:20:14.420910    6912 system_pods.go:59] 8 kube-system pods found
	I0512 01:20:14.420910    6912 system_pods.go:61] "coredns-64897985d-5ws8d" [3ab4607e-b641-4ec4-95b8-e748182293c3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0512 01:20:14.420910    6912 system_pods.go:61] "etcd-newest-cni-20220512011616-7184" [bd9ea317-d13b-4bd1-816d-a9cebacb0f9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0512 01:20:14.420910    6912 system_pods.go:61] "kube-apiserver-newest-cni-20220512011616-7184" [fea8351d-15a8-453d-a564-46ca2334caf1] Running
	I0512 01:20:14.420910    6912 system_pods.go:61] "kube-controller-manager-newest-cni-20220512011616-7184" [18ff8810-e805-4c54-bca7-51c98357c897] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0512 01:20:14.420910    6912 system_pods.go:61] "kube-proxy-4rh4b" [b0893ff4-bc22-47ac-8feb-c4f6dd7d3fb0] Running
	I0512 01:20:14.425116    6912 system_pods.go:61] "kube-scheduler-newest-cni-20220512011616-7184" [658fbc66-063f-4b58-b41f-e054ec6b9ec4] Running
	I0512 01:20:14.425158    6912 system_pods.go:61] "metrics-server-b955d9d8-nkjgl" [6ec6d39c-3946-4260-a4ae-3b080a511a18] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0512 01:20:14.425158    6912 system_pods.go:61] "storage-provisioner" [e46bc56d-a455-44fd-a6ca-36a598ad3fdd] Running
	I0512 01:20:14.425208    6912 system_pods.go:74] duration metric: took 21.3249ms to wait for pod list to return data ...
	I0512 01:20:14.425208    6912 default_sa.go:34] waiting for default service account to be created ...
	I0512 01:20:14.478487    6912 default_sa.go:45] found service account: "default"
	I0512 01:20:14.478487    6912 default_sa.go:55] duration metric: took 53.2761ms for default service account to be created ...
	I0512 01:20:14.478487    6912 kubeadm.go:548] duration metric: took 2.8057519s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0512 01:20:14.478487    6912 node_conditions.go:102] verifying NodePressure condition ...
	I0512 01:20:14.502796    6912 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0512 01:20:14.502796    6912 node_conditions.go:123] node cpu capacity is 16
	I0512 01:20:14.502796    6912 node_conditions.go:105] duration metric: took 24.3079ms to run NodePressure ...
	I0512 01:20:14.502796    6912 start.go:213] waiting for startup goroutines ...
	I0512 01:20:14.892055    6912 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184: (1.596429s)
	I0512 01:20:14.893039    6912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50854 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-20220512011616-7184\id_rsa Username:docker}
	I0512 01:20:14.907528    6912 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184: (1.5849104s)
	I0512 01:20:14.908022    6912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50854 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-20220512011616-7184\id_rsa Username:docker}
	I0512 01:20:14.923028    6912 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184: (1.653413s)
	I0512 01:20:14.923028    6912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50854 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-20220512011616-7184\id_rsa Username:docker}
	I0512 01:20:15.065660    6912 cli_runner.go:217] Completed: docker container inspect newest-cni-20220512011616-7184 --format={{.State.Status}}: (1.6470015s)
	I0512 01:20:15.065660    6912 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0512 01:20:15.065660    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0512 01:20:15.087656    6912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184
	I0512 01:20:15.403525    6912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 01:20:15.468338    6912 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0512 01:20:15.468445    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0512 01:20:15.473573    6912 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0512 01:20:15.473573    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0512 01:20:15.681380    6912 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0512 01:20:15.681380    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0512 01:20:15.772044    6912 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0512 01:20:15.772044    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0512 01:20:15.974184    6912 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0512 01:20:15.974184    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0512 01:20:16.070371    6912 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0512 01:20:16.070371    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0512 01:20:16.325294    6912 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220512011616-7184: (1.2374346s)
	I0512 01:20:16.325740    6912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50854 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-20220512011616-7184\id_rsa Username:docker}
	I0512 01:20:16.382374    6912 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0512 01:20:16.382374    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0512 01:20:16.399346    6912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0512 01:20:16.601484    6912 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0512 01:20:16.602267    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0512 01:20:16.879990    6912 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0512 01:20:16.879990    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0512 01:20:16.891990    6912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0512 01:20:17.187007    6912 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0512 01:20:17.187007    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0512 01:20:17.300673    6912 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0512 01:20:17.300673    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0512 01:20:17.571563    6912 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0512 01:20:17.571563    6912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0512 01:20:17.799999    6912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0512 01:20:19.474901    6912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.0711719s)
	I0512 01:20:20.060411    6912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.660882s)
	I0512 01:20:20.060411    6912 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220512011616-7184"
	I0512 01:20:20.060411    6912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.1682621s)
	I0512 01:20:21.385198    6912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.5850169s)
	I0512 01:20:21.388281    6912 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0512 01:20:21.390184    6912 addons.go:417] enableAddons completed in 9.7173095s
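
Each addon in the sequence above follows the same pattern: its manifests are copied into /etc/kubernetes/addons inside the node (the "scp memory -->" lines), then applied in one kubectl invocation using the cluster's own kubeconfig and bundled kubectl binary. A minimal Go sketch of the apply step (illustration only, not minikube's code; the file list is truncated to two of the ten dashboard manifests):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// `sudo VAR=value cmd` sets VAR in the command's environment, so this
		// mirrors the `sudo KUBECONFIG=... kubectl apply -f ... -f ...` log line.
		args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.23.6-rc.0/kubectl", "apply"}
		for _, f := range []string{
			"dashboard-ns.yaml", "dashboard-clusterrole.yaml", // ...and so on
		} {
			args = append(args, "-f", "/etc/kubernetes/addons/"+f)
		}
		cmd := exec.Command("sudo", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}
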
	I0512 01:20:21.640158    6912 start.go:499] kubectl: 1.18.2, cluster: 1.23.6-rc.0 (minor skew: 5)
	I0512 01:20:21.643797    6912 out.go:177] 
	W0512 01:20:21.646687    6912 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.23.6-rc.0.
	I0512 01:20:21.651852    6912 out.go:177]   - Want kubectl v1.23.6-rc.0? Try 'minikube kubectl -- get pods -A'
	I0512 01:20:21.660836    6912 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220512011616-7184" cluster and "default" namespace by default
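
The "(minor skew: 5)" note above compares the minor version of the host kubectl (1.18.2) against the cluster (1.23.6-rc.0); anything beyond one minor version is outside kubectl's version-skew support policy, hence the warning. A small sketch of that arithmetic (assumes well-formed "major.minor.patch" strings; the "-rc.0" suffix lands in the patch field and is never parsed; not minikube's actual parser):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorSkew returns the absolute difference between the minor version
	// components of two "major.minor.patch" version strings.
	func minorSkew(a, b string) int {
		minor := func(v string) int {
			n, _ := strconv.Atoi(strings.Split(v, ".")[1])
			return n
		}
		d := minor(a) - minor(b)
		if d < 0 {
			d = -d
		}
		return d
	}

	func main() {
		fmt.Println(minorSkew("1.18.2", "1.23.6-rc.0")) // 5, as in the log
	}
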
	I0512 01:20:29.127663    4756 system_pods.go:86] 8 kube-system pods found
	I0512 01:20:29.127843    4756 system_pods.go:89] "coredns-5644d7b6d9-ds6wg" [274c71a2-5a74-40cf-9719-e53e1901acdb] Running
	I0512 01:20:29.127956    4756 system_pods.go:89] "etcd-old-k8s-version-20220512010246-7184" [8197f31d-c95a-42f1-9974-091d1c27c60b] Running
	I0512 01:20:29.127994    4756 system_pods.go:89] "kube-apiserver-old-k8s-version-20220512010246-7184" [5e4b74f1-7f9f-4b1e-bfbb-762b651204a1] Running
	I0512 01:20:29.127994    4756 system_pods.go:89] "kube-controller-manager-old-k8s-version-20220512010246-7184" [464fedb8-445d-4d2b-98af-2fea913fa291] Running
	I0512 01:20:29.127994    4756 system_pods.go:89] "kube-proxy-5dp6x" [29ed9a2f-069f-409e-8a9f-ce8869e1a908] Running
	I0512 01:20:29.127994    4756 system_pods.go:89] "kube-scheduler-old-k8s-version-20220512010246-7184" [ee09078d-37ef-42bd-bdc4-c6d4d41df903] Running
	I0512 01:20:29.128099    4756 system_pods.go:89] "metrics-server-6f89b5864b-xnzbk" [7c6b6847-36d4-4700-b45c-4e00a73b9477] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0512 01:20:29.128099    4756 system_pods.go:89] "storage-provisioner" [aab59255-6979-4cee-bb62-a1d8611e5cf8] Running
	I0512 01:20:29.128171    4756 system_pods.go:126] duration metric: took 56.7371491s to wait for k8s-apps to be running ...
	I0512 01:20:29.128226    4756 system_svc.go:44] waiting for kubelet service to be running ....
	I0512 01:20:29.142548    4756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 01:20:29.165073    4756 system_svc.go:56] duration metric: took 36.845ms WaitForService to wait for kubelet.
	I0512 01:20:29.165613    4756 kubeadm.go:548] duration metric: took 1m9.344661s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0512 01:20:29.165613    4756 node_conditions.go:102] verifying NodePressure condition ...
	I0512 01:20:29.178919    4756 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
	I0512 01:20:29.178919    4756 node_conditions.go:123] node cpu capacity is 16
	I0512 01:20:29.178919    4756 node_conditions.go:105] duration metric: took 13.3051ms to run NodePressure ...
	I0512 01:20:29.178919    4756 start.go:213] waiting for startup goroutines ...
	I0512 01:20:29.451947    4756 start.go:499] kubectl: 1.18.2, cluster: 1.16.0 (minor skew: 2)
	I0512 01:20:29.635603    4756 out.go:177] 
	W0512 01:20:29.769121    4756 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.16.0.
	I0512 01:20:29.772435    4756 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0512 01:20:29.780826    4756 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-20220512010246-7184" cluster and "default" namespace by default
	I0512 01:20:35.734579    4188 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20220512010229-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir: (23.6487399s)
	I0512 01:20:35.734579    4188 kic.go:188] duration metric: took 23.668940 seconds to extract preloaded images to volume
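
The extraction above is how preloads avoid pulling images at cluster start: an lz4-compressed tarball of the node's /var content is streamed into the named Docker volume through a throwaway container whose entrypoint is tar. A Go sketch of the same invocation (paths and names taken from this log; the image's @sha256 digest pin is omitted for brevity):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Unpack the preloaded-images tarball into the node volume via a
		// disposable container, as in the `docker run --entrypoint /usr/bin/tar` line.
		tarball := `C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4`
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", "auto-20220512010229-7184:/extractDir",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138",
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}
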
	I0512 01:20:35.740587    4188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:20:38.037595    4188 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2968901s)
	I0512 01:20:38.037595    4188 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:86 OomKillDisable:true NGoroutines:63 SystemTime:2022-05-12 01:20:36.9027889 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:20:38.051149    4188 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0512 01:20:40.242818    4188 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.191556s)
	I0512 01:20:40.255389    4188 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220512010229-7184 --name auto-20220512010229-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220512010229-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220512010229-7184 --network auto-20220512010229-7184 --ip 192.168.58.2 --volume auto-20220512010229-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
	I0512 01:20:42.478695    4188 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20220512010229-7184 --name auto-20220512010229-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20220512010229-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20220512010229-7184 --network auto-20220512010229-7184 --ip 192.168.58.2 --volume auto-20220512010229-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: (2.2231917s)
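
Every --publish=127.0.0.1:: mapping in the docker run above lets Docker pick a free host port, which is why the log keeps running docker container inspect with a Go template afterwards to recover the assigned ports. A sketch of one such lookup (illustration only; 22/tcp is the container's SSH port):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Recover the host port Docker assigned to the container's SSH port,
		// mirroring the Go-template inspect lines in the log.
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect",
			"-f", format, "auto-20220512010229-7184").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 50942
	}
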
	I0512 01:20:42.488699    4188 cli_runner.go:164] Run: docker container inspect auto-20220512010229-7184 --format={{.State.Running}}
	I0512 01:20:43.648407    4188 cli_runner.go:217] Completed: docker container inspect auto-20220512010229-7184 --format={{.State.Running}}: (1.1586552s)
	I0512 01:20:43.656247    4188 cli_runner.go:164] Run: docker container inspect auto-20220512010229-7184 --format={{.State.Status}}
	I0512 01:20:44.760296    4188 cli_runner.go:217] Completed: docker container inspect auto-20220512010229-7184 --format={{.State.Status}}: (1.1039925s)
	I0512 01:20:44.767306    4188 cli_runner.go:164] Run: docker exec auto-20220512010229-7184 stat /var/lib/dpkg/alternatives/iptables
	I0512 01:20:46.054013    4188 cli_runner.go:217] Completed: docker exec auto-20220512010229-7184 stat /var/lib/dpkg/alternatives/iptables: (1.2866406s)
	I0512 01:20:46.054013    4188 oci.go:247] the created container "auto-20220512010229-7184" has a running status.
	I0512 01:20:46.054013    4188 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\auto-20220512010229-7184\id_rsa...
	I0512 01:20:46.471711    4188 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\auto-20220512010229-7184\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0512 01:20:47.767250    4188 cli_runner.go:164] Run: docker container inspect auto-20220512010229-7184 --format={{.State.Status}}
	I0512 01:20:48.908876    4188 cli_runner.go:217] Completed: docker container inspect auto-20220512010229-7184 --format={{.State.Status}}: (1.1414628s)
	I0512 01:20:48.926841    4188 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0512 01:20:48.926841    4188 kic_runner.go:114] Args: [docker exec --privileged auto-20220512010229-7184 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0512 01:20:49.968722    4792 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (51.0545819s)
	I0512 01:20:49.982715    4792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 01:20:50.021715    4792 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0512 01:20:50.047725    4792 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0512 01:20:50.059709    4792 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0512 01:20:50.085720    4792 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0512 01:20:50.085720    4792 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0512 01:20:50.250555    4188 kic_runner.go:123] Done: [docker exec --privileged auto-20220512010229-7184 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.3235276s)
	I0512 01:20:50.255049    4188 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\auto-20220512010229-7184\id_rsa...
	I0512 01:20:50.850170    4188 cli_runner.go:164] Run: docker container inspect auto-20220512010229-7184 --format={{.State.Status}}
	I0512 01:20:51.926226    4188 cli_runner.go:217] Completed: docker container inspect auto-20220512010229-7184 --format={{.State.Status}}: (1.0758274s)
	I0512 01:20:51.926302    4188 machine.go:88] provisioning docker machine ...
	I0512 01:20:51.926390    4188 ubuntu.go:169] provisioning hostname "auto-20220512010229-7184"
	I0512 01:20:51.936531    4188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184
	I0512 01:20:53.032292    4188 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184: (1.0955797s)
	I0512 01:20:53.036616    4188 main.go:134] libmachine: Using SSH client type: native
	I0512 01:20:53.037612    4188 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50942 <nil> <nil>}
	I0512 01:20:53.037612    4188 main.go:134] libmachine: About to run SSH command:
	sudo hostname auto-20220512010229-7184 && echo "auto-20220512010229-7184" | sudo tee /etc/hostname
	I0512 01:20:53.236394    4188 main.go:134] libmachine: SSH cmd err, output: <nil>: auto-20220512010229-7184
	
	I0512 01:20:53.245809    4188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184
	I0512 01:20:54.422089    4188 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184: (1.1762193s)
	I0512 01:20:54.425089    4188 main.go:134] libmachine: Using SSH client type: native
	I0512 01:20:54.426091    4188 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50942 <nil> <nil>}
	I0512 01:20:54.426091    4188 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-20220512010229-7184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20220512010229-7184/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-20220512010229-7184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0512 01:20:54.622533    4188 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0512 01:20:54.622533    4188 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0512 01:20:54.622533    4188 ubuntu.go:177] setting up certificates
	I0512 01:20:54.622533    4188 provision.go:83] configureAuth start
	I0512 01:20:54.634553    4188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220512010229-7184
	I0512 01:20:55.837746    4188 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220512010229-7184: (1.2030963s)
	I0512 01:20:55.837866    4188 provision.go:138] copyHostCerts
	I0512 01:20:55.837866    4188 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0512 01:20:55.837866    4188 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0512 01:20:55.838813    4188 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0512 01:20:55.840112    4188 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0512 01:20:55.840184    4188 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0512 01:20:55.840549    4188 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0512 01:20:55.841894    4188 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0512 01:20:55.841981    4188 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0512 01:20:55.842440    4188 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0512 01:20:55.843369    4188 provision.go:112] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.auto-20220512010229-7184 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20220512010229-7184]
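
The server cert above is generated with SANs covering the node IP, loopback, and the machine names, so TLS verification succeeds no matter which address a client dials. A self-contained Go sketch producing a certificate with the same SAN set (illustration only: it self-signs, whereas minikube signs with its ca.pem/ca-key.pem):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.auto-20220512010229-7184"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the san=[...] list in the log line above.
			IPAddresses: []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "auto-20220512010229-7184"},
		}
		// Template doubles as parent, i.e. the certificate is self-signed.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
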
	I0512 01:20:56.306928    4188 provision.go:172] copyRemoteCerts
	I0512 01:20:56.319974    4188 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0512 01:20:56.329935    4188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184
	I0512 01:20:57.552955    4188 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184: (1.2229562s)
	I0512 01:20:57.552955    4188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50942 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\auto-20220512010229-7184\id_rsa Username:docker}
	I0512 01:20:57.711987    4188 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.3919414s)
	I0512 01:20:57.713721    4188 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0512 01:20:57.772018    4188 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I0512 01:20:57.821436    4188 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0512 01:20:57.873681    4188 provision.go:86] duration metric: configureAuth took 3.2509807s
	I0512 01:20:57.873681    4188 ubuntu.go:193] setting minikube options for container-runtime
	I0512 01:20:57.874705    4188 config.go:178] Loaded profile config "auto-20220512010229-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:20:57.890672    4188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184
	I0512 01:20:59.036444    4188 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184: (1.1457132s)
	I0512 01:20:59.040446    4188 main.go:134] libmachine: Using SSH client type: native
	I0512 01:20:59.040446    4188 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50942 <nil> <nil>}
	I0512 01:20:59.040446    4188 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0512 01:20:59.262939    4188 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0512 01:20:59.262939    4188 ubuntu.go:71] root file system type: overlay
	I0512 01:20:59.262939    4188 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0512 01:20:59.280110    4188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184
	I0512 01:21:00.500281    4188 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184: (1.2199894s)
	I0512 01:21:00.506156    4188 main.go:134] libmachine: Using SSH client type: native
	I0512 01:21:00.507194    4188 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50942 <nil> <nil>}
	I0512 01:21:00.507194    4188 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0512 01:21:00.731136    4188 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0512 01:21:00.744785    4188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184
	I0512 01:21:01.916813    4188 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184: (1.1719673s)
	I0512 01:21:01.923820    4188 main.go:134] libmachine: Using SSH client type: native
	I0512 01:21:01.924819    4188 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 50942 <nil> <nil>}
	I0512 01:21:01.924819    4188 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0512 01:21:03.446750    4188 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-05 13:17:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-12 01:21:00.714844000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0512 01:21:03.447107    4188 machine.go:91] provisioned docker machine in 11.5201641s
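
The unit rollout shown in the diff above hinges on diff(1) exiting non-zero when the files differ, so the mv/daemon-reload/enable/restart branch of the one-liner runs only when there is actually something to deploy; an unchanged unit leaves the running daemon untouched. A Go sketch of the same compare-then-replace idea (illustration only, not minikube's implementation; needs root on a systemd host):

	package main

	import (
		"bytes"
		"os"
		"os/exec"
	)

	// updateUnit replaces dst with src and restarts docker only when the
	// contents differ, mirroring the `diff -u ... || { mv ...; systemctl ... }`
	// one-liner above.
	func updateUnit(dst, src string) error {
		old, _ := os.ReadFile(dst) // a missing file reads as empty, forcing an update
		cur, err := os.ReadFile(src)
		if err != nil {
			return err
		}
		if bytes.Equal(old, cur) {
			return nil // nothing changed; keep the running daemon untouched
		}
		if err := os.Rename(src, dst); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
		} {
			if err := exec.Command("systemctl", args...).Run(); err != nil {
				return err
			}
		}
		return nil
	}

	func main() {
		if err := updateUnit("/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new"); err != nil {
			os.Exit(1)
		}
	}
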
	I0512 01:21:03.447107    4188 client.go:171] LocalClient.Create took 1m3.465425s
	I0512 01:21:03.447206    4188 start.go:173] duration metric: libmachine.API.Create for "auto-20220512010229-7184" took 1m3.465478s
	I0512 01:21:03.447258    4188 start.go:306] post-start starting for "auto-20220512010229-7184" (driver="docker")
	I0512 01:21:03.447296    4188 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0512 01:21:03.460656    4188 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0512 01:21:03.467833    4188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184
	I0512 01:21:04.732787    4188 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184: (1.2648893s)
	I0512 01:21:04.732787    4188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50942 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\auto-20220512010229-7184\id_rsa Username:docker}
	I0512 01:21:04.867883    4188 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.407155s)
	I0512 01:21:04.890873    4188 ssh_runner.go:195] Run: cat /etc/os-release
	I0512 01:21:04.901879    4188 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0512 01:21:04.901879    4188 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0512 01:21:04.901879    4188 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0512 01:21:04.901879    4188 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0512 01:21:04.901879    4188 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0512 01:21:04.902978    4188 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0512 01:21:04.903870    4188 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem -> 71842.pem in /etc/ssl/certs
	I0512 01:21:04.914881    4188 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0512 01:21:04.941868    4188 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /etc/ssl/certs/71842.pem (1708 bytes)
	I0512 01:21:04.997105    4188 start.go:309] post-start completed in 1.5497672s
	I0512 01:21:05.009155    4188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220512010229-7184
	I0512 01:21:06.301449    4188 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220512010229-7184: (1.2922275s)
	I0512 01:21:06.301449    4188 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\config.json ...
	I0512 01:21:06.323447    4188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 01:21:06.333451    4188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184
	I0512 01:21:07.645074    4188 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184: (1.3115548s)
	I0512 01:21:07.645074    4188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50942 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\auto-20220512010229-7184\id_rsa Username:docker}
	I0512 01:21:07.789349    4188 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.4658263s)
	I0512 01:21:07.810348    4188 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0512 01:21:07.825336    4188 start.go:134] duration metric: createHost completed in 1m7.8474256s
	I0512 01:21:07.825336    4188 start.go:81] releasing machines lock for "auto-20220512010229-7184", held for 1m7.84822s
	I0512 01:21:07.834344    4188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220512010229-7184
	I0512 01:21:09.366448    4188 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20220512010229-7184: (1.5320259s)
	I0512 01:21:09.368461    4188 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0512 01:21:09.393448    4188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184
	I0512 01:21:09.397453    4188 ssh_runner.go:195] Run: systemctl --version
	I0512 01:21:09.413472    4188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184
	I0512 01:21:11.127873    4188 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184: (1.7343356s)
	I0512 01:21:11.127873    4188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50942 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\auto-20220512010229-7184\id_rsa Username:docker}
	I0512 01:21:11.142895    4188 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20220512010229-7184: (1.7293339s)
	I0512 01:21:11.143898    4188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50942 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\auto-20220512010229-7184\id_rsa Username:docker}
	I0512 01:21:11.293901    4188 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.9253405s)
	I0512 01:21:11.294905    4188 ssh_runner.go:235] Completed: systemctl --version: (1.8973548s)
	I0512 01:21:11.307892    4188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0512 01:21:11.423767    4188 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 01:21:11.450794    4188 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0512 01:21:11.459790    4188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0512 01:21:11.491780    4188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0512 01:21:11.535770    4188 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0512 01:21:11.733908    4188 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0512 01:21:11.927206    4188 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 01:21:12.009214    4188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0512 01:21:12.209796    4188 ssh_runner.go:195] Run: sudo systemctl start docker
	I0512 01:21:12.248824    4188 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 01:21:12.355817    4188 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 01:21:13.705926    4792 out.go:204]   - Generating certificates and keys ...
	I0512 01:21:13.710906    4792 out.go:204]   - Booting up control plane ...
	I0512 01:21:12.464817    4188 out.go:204] * Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	I0512 01:21:12.483832    4188 cli_runner.go:164] Run: docker exec -t auto-20220512010229-7184 dig +short host.docker.internal
	I0512 01:21:14.077242    4188 cli_runner.go:217] Completed: docker exec -t auto-20220512010229-7184 dig +short host.docker.internal: (1.5933278s)
	I0512 01:21:14.077242    4188 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0512 01:21:14.102374    4188 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0512 01:21:14.112403    4188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
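
The one-liner above updates /etc/hosts without ever truncating it in place: it rebuilds the file minus any stale host.minikube.internal record, appends the fresh one, and only then copies the temp file over the original. A commented sketch of the same pattern:

    {
      grep -v $'\thost.minikube.internal$' /etc/hosts   # drop any stale record
      echo "192.168.65.2	host.minikube.internal"        # append the fresh one
    } > /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts                    # swap in one step
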
	I0512 01:21:14.207031    4188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" auto-20220512010229-7184
	I0512 01:21:13.717901    4792 out.go:204]   - Configuring RBAC rules ...
	I0512 01:21:13.721901    4792 cni.go:95] Creating CNI manager for ""
	I0512 01:21:13.721901    4792 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 01:21:13.721901    4792 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0512 01:21:13.735916    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:21:13.735916    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=50a7977b568d2ad3e04003527a57f4502d6177a0 minikube.k8s.io/name=default-k8s-different-port-20220512011148-7184 minikube.k8s.io/updated_at=2022_05_12T01_21_13_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:21:13.789921    4792 ops.go:34] apiserver oom_adj: -16
	I0512 01:21:14.414703    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:21:15.547387    4188 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" auto-20220512010229-7184: (1.3402872s)
	I0512 01:21:15.547829    4188 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:21:15.555848    4188 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 01:21:15.643705    4188 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 01:21:15.643705    4188 docker.go:541] Images already preloaded, skipping extraction
	I0512 01:21:15.652704    4188 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 01:21:15.739971    4188 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 01:21:15.739971    4188 cache_images.go:84] Images are preloaded, skipping loading
	I0512 01:21:15.748971    4188 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0512 01:21:15.956197    4188 cni.go:95] Creating CNI manager for ""
	I0512 01:21:15.956197    4188 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0512 01:21:15.956197    4188 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0512 01:21:15.956197    4188 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-20220512010229-7184 NodeName:auto-20220512010229-7184 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0512 01:21:15.957212    4188 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "auto-20220512010229-7184"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
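
A config like the one generated above can be sanity-checked on the node before the real kubeadm init later in this phase; --dry-run validates it and prints what would be done without modifying the host. A sketch, assuming the same binaries directory this log uses for kubeadm:

    sudo /var/lib/minikube/binaries/v1.23.5/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run
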
	
	I0512 01:21:15.957212    4188 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=auto-20220512010229-7184 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:auto-20220512010229-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
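
In the kubelet drop-in above, the empty ExecStart= line is deliberate: for anything but Type=oneshot units, systemd rejects a second ExecStart= unless the list is first cleared, so the drop-in resets it before setting the full kubelet command line. The merged result can be inspected with the same idiom this log already uses for docker.service:

    systemctl cat kubelet                           # unit plus all drop-ins
    systemctl show kubelet -p ExecStart --no-pager  # the effective command
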
	I0512 01:21:15.966201    4188 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0512 01:21:16.012198    4188 binaries.go:44] Found k8s binaries, skipping transfer
	I0512 01:21:16.024199    4188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0512 01:21:16.046199    4188 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (350 bytes)
	I0512 01:21:16.098719    4188 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0512 01:21:16.137599    4188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2046 bytes)
	I0512 01:21:16.209888    4188 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0512 01:21:16.220891    4188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0512 01:21:16.243399    4188 certs.go:54] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184 for IP: 192.168.58.2
	I0512 01:21:16.243929    4188 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I0512 01:21:16.245785    4188 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I0512 01:21:16.245889    4188 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.key
	I0512 01:21:16.245889    4188 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.crt with IP's: []
	I0512 01:21:16.657223    4188 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.crt ...
	I0512 01:21:16.657223    4188 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.crt: {Name:mka40a0e5fbccf72384b811be0c0cfee758601c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:21:16.659232    4188 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.key ...
	I0512 01:21:16.659232    4188 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.key: {Name:mk009dbf99415d993c2dba5148db02bacc304041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:21:16.659232    4188 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\apiserver.key.cee25041
	I0512 01:21:16.660701    4188 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0512 01:21:17.408137    4188 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\apiserver.crt.cee25041 ...
	I0512 01:21:17.408137    4188 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\apiserver.crt.cee25041: {Name:mkef71b75a55c0560ad8580c10899f6590e15f63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:21:17.409189    4188 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\apiserver.key.cee25041 ...
	I0512 01:21:17.409189    4188 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\apiserver.key.cee25041: {Name:mk627a71416c3c4081ab8552e01e2ddea7dda28a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:21:17.410217    4188 certs.go:320] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\apiserver.crt.cee25041 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\apiserver.crt
	I0512 01:21:17.418205    4188 certs.go:324] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\apiserver.key.cee25041 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\apiserver.key
	I0512 01:21:17.419196    4188 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\proxy-client.key
	I0512 01:21:17.419196    4188 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\proxy-client.crt with IP's: []
	I0512 01:21:17.548857    4188 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\proxy-client.crt ...
	I0512 01:21:17.548857    4188 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\proxy-client.crt: {Name:mk98d65ba752be7b3c5b81838a05cde9dfaf6188 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:21:17.550865    4188 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\proxy-client.key ...
	I0512 01:21:17.550865    4188 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\proxy-client.key: {Name:mk87670321cbd80b396a73257e69c5aa83418ff1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:21:17.558591    4188 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184.pem (1338 bytes)
	W0512 01:21:17.558953    4188 certs.go:384] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184_empty.pem, impossibly tiny 0 bytes
	I0512 01:21:17.558953    4188 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0512 01:21:17.559280    4188 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0512 01:21:17.559280    4188 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0512 01:21:17.559929    4188 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0512 01:21:17.560070    4188 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem (1708 bytes)
	I0512 01:21:17.560935    4188 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0512 01:21:17.616554    4188 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0512 01:21:17.666266    4188 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0512 01:21:17.719633    4188 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0512 01:21:17.777476    4188 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0512 01:21:17.837038    4188 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0512 01:21:17.882048    4188 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0512 01:21:17.931044    4188 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0512 01:21:17.979235    4188 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /usr/share/ca-certificates/71842.pem (1708 bytes)
	I0512 01:21:18.033019    4188 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0512 01:21:18.095076    4188 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184.pem --> /usr/share/ca-certificates/7184.pem (1338 bytes)
	I0512 01:21:18.149588    4188 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0512 01:21:18.212080    4188 ssh_runner.go:195] Run: openssl version
	I0512 01:21:18.234071    4188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71842.pem && ln -fs /usr/share/ca-certificates/71842.pem /etc/ssl/certs/71842.pem"
	I0512 01:21:18.287433    4188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71842.pem
	I0512 01:21:18.300419    4188 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 11 23:10 /usr/share/ca-certificates/71842.pem
	I0512 01:21:18.316434    4188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71842.pem
	I0512 01:21:18.343426    4188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71842.pem /etc/ssl/certs/3ec20f2e.0"
	I0512 01:21:18.387426    4188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0512 01:21:18.439454    4188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:21:18.450435    4188 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 11 23:00 /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:21:18.459436    4188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:21:18.496701    4188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0512 01:21:18.533647    4188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7184.pem && ln -fs /usr/share/ca-certificates/7184.pem /etc/ssl/certs/7184.pem"
	I0512 01:21:18.582603    4188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7184.pem
	I0512 01:21:18.596592    4188 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 11 23:10 /usr/share/ca-certificates/7184.pem
	I0512 01:21:18.609605    4188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7184.pem
	I0512 01:21:18.638604    4188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7184.pem /etc/ssl/certs/51391683.0"
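
Each openssl x509 -hash run above prints the certificate's subject hash, which is exactly the name given to the /etc/ssl/certs/<hash>.0 symlink on the following line; that hash-named link is how OpenSSL locates trusted CAs. For example, using the minikubeCA value visible in this log:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink above
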
	I0512 01:21:18.667598    4188 kubeadm.go:391] StartCluster: {Name:auto-20220512010229-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:auto-20220512010229-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 01:21:18.674673    4188 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0512 01:21:18.760056    4188 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0512 01:21:18.798050    4188 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0512 01:21:18.820896    4188 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0512 01:21:18.840303    4188 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0512 01:21:18.902243    4188 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0512 01:21:18.902243    4188 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0512 01:21:15.915215    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:21:16.413064    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:21:16.904090    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:21:17.408369    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:21:17.907036    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:21:18.406458    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:21:18.915244    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:21:19.405568    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:21:19.906514    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:21:20.398007    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:21:20.903052    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:21:21.408624    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:21:21.906473    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:21:22.406096    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:21:22.911716    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:21:23.411997    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:21:23.912706    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:21:24.404853    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:21:24.909794    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:21:25.408891    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:21:25.905723    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:21:26.405291    4792 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:21:27.382949    4792 kubeadm.go:1020] duration metric: took 13.6603439s to wait for elevateKubeSystemPrivileges.
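
The burst of identical kubectl get sa default runs above is a poll loop: the default service account only appears once the controller manager is up, so minikube retries roughly twice a second until it exists, which is what the 13.66s elevateKubeSystemPrivileges metric on this line measures. The equivalent shell sketch (not minikube's actual code):

    until sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5    # not ready yet; keep polling
    done
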
	I0512 01:21:27.383104    4792 kubeadm.go:393] StartCluster complete in 6m7.1139535s
	I0512 01:21:27.383104    4792 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:21:27.383104    4792 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0512 01:21:27.386692    4792 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:21:28.227916    4792 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220512011148-7184" rescaled to 1
	I0512 01:21:28.227916    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0512 01:21:28.227916    4792 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 01:21:28.229916    4792 out.go:177] * Verifying Kubernetes components...
	I0512 01:21:28.227916    4792 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0512 01:21:28.228915    4792 config.go:178] Loaded profile config "default-k8s-different-port-20220512011148-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:21:28.234916    4792 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220512011148-7184"
	I0512 01:21:28.234916    4792 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220512011148-7184"
	I0512 01:21:28.234916    4792 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220512011148-7184"
	W0512 01:21:28.234916    4792 addons.go:165] addon storage-provisioner should already be in state true
	I0512 01:21:28.234916    4792 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220512011148-7184"
	I0512 01:21:28.234916    4792 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220512011148-7184"
	I0512 01:21:28.234916    4792 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220512011148-7184"
	I0512 01:21:28.234916    4792 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220512011148-7184"
	W0512 01:21:28.234916    4792 addons.go:165] addon dashboard should already be in state true
	I0512 01:21:28.234916    4792 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220512011148-7184"
	W0512 01:21:28.234916    4792 addons.go:165] addon metrics-server should already be in state true
	I0512 01:21:28.234916    4792 host.go:66] Checking if "default-k8s-different-port-20220512011148-7184" exists ...
	I0512 01:21:28.234916    4792 host.go:66] Checking if "default-k8s-different-port-20220512011148-7184" exists ...
	I0512 01:21:28.234916    4792 host.go:66] Checking if "default-k8s-different-port-20220512011148-7184" exists ...
	I0512 01:21:28.258920    4792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 01:21:28.274950    4792 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220512011148-7184 --format={{.State.Status}}
	I0512 01:21:28.274950    4792 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220512011148-7184 --format={{.State.Status}}
	I0512 01:21:28.274950    4792 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220512011148-7184 --format={{.State.Status}}
	I0512 01:21:28.276940    4792 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220512011148-7184 --format={{.State.Status}}
	I0512 01:21:28.610927    4792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0512 01:21:28.629875    4792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220512011148-7184
	I0512 01:21:30.118744    4792 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220512011148-7184 --format={{.State.Status}}: (1.8406853s)
	I0512 01:21:30.123755    4792 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0512 01:21:30.127757    4792 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0512 01:21:30.127757    4792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0512 01:21:30.130749    4792 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220512011148-7184 --format={{.State.Status}}: (1.8557033s)
	I0512 01:21:30.135739    4792 out.go:177]   - Using image kubernetesui/dashboard:v2.5.1
	I0512 01:21:30.137744    4792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220512011148-7184
	I0512 01:21:30.149767    4792 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0512 01:21:30.146745    4792 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220512011148-7184 --format={{.State.Status}}: (1.8716983s)
	I0512 01:21:30.154751    4792 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0512 01:21:30.153752    4792 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0512 01:21:30.157824    4792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0512 01:21:30.158749    4792 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 01:21:30.158749    4792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0512 01:21:30.162758    4792 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220512011148-7184 --format={{.State.Status}}: (1.8877105s)
	I0512 01:21:30.177460    4792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220512011148-7184
	I0512 01:21:30.180459    4792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220512011148-7184
	I0512 01:21:30.296375    4792 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220512011148-7184"
	W0512 01:21:30.297351    4792 addons.go:165] addon default-storageclass should already be in state true
	I0512 01:21:30.297351    4792 host.go:66] Checking if "default-k8s-different-port-20220512011148-7184" exists ...
	I0512 01:21:30.341370    4792 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220512011148-7184 --format={{.State.Status}}
	I0512 01:21:30.446468    4792 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220512011148-7184: (1.8164997s)
	I0512 01:21:30.446468    4792 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220512011148-7184" to be "Ready" ...
	I0512 01:21:30.576970    4792 node_ready.go:49] node "default-k8s-different-port-20220512011148-7184" has status "Ready":"True"
	I0512 01:21:30.576970    4792 node_ready.go:38] duration metric: took 130.4948ms waiting for node "default-k8s-different-port-20220512011148-7184" to be "Ready" ...
	I0512 01:21:30.576970    4792 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0512 01:21:30.606954    4792 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-748vh" in "kube-system" namespace to be "Ready" ...
	I0512 01:21:31.742450    4792 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220512011148-7184: (1.5619105s)
	I0512 01:21:31.742450    4792 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220512011148-7184: (1.5996072s)
	I0512 01:21:31.742450    4792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50704 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-different-port-20220512011148-7184\id_rsa Username:docker}
	I0512 01:21:31.742450    4792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50704 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-different-port-20220512011148-7184\id_rsa Username:docker}
	I0512 01:21:31.788495    4792 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220512011148-7184: (1.6109519s)
	I0512 01:21:31.788495    4792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50704 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-different-port-20220512011148-7184\id_rsa Username:docker}
	I0512 01:21:31.960513    4792 cli_runner.go:217] Completed: docker container inspect default-k8s-different-port-20220512011148-7184 --format={{.State.Status}}: (1.6180547s)
	I0512 01:21:31.960513    4792 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0512 01:21:31.960513    4792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0512 01:21:31.966526    4792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220512011148-7184
	I0512 01:21:32.278880    4792 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0512 01:21:32.278880    4792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0512 01:21:32.294506    4792 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0512 01:21:32.294506    4792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0512 01:21:32.315491    4792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 01:21:32.485453    4792 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0512 01:21:32.485453    4792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0512 01:21:32.573139    4792 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0512 01:21:32.573244    4792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0512 01:21:32.686872    4792 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0512 01:21:32.686872    4792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0512 01:21:32.700863    4792 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0512 01:21:32.700863    4792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0512 01:21:32.804833    4792 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0512 01:21:32.804833    4792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0512 01:21:32.914364    4792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0512 01:21:32.980208    4792 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0512 01:21:32.980208    4792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0512 01:21:33.173866    4792 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0512 01:21:33.173866    4792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0512 01:21:33.407704    4792 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220512011148-7184: (1.4411039s)
	I0512 01:21:33.407704    4792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50704 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\default-k8s-different-port-20220512011148-7184\id_rsa Username:docker}
	I0512 01:21:33.487173    4792 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0512 01:21:33.487173    4792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0512 01:21:33.608123    4792 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0512 01:21:33.608123    4792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0512 01:21:33.690985    4792 pod_ready.go:102] pod "coredns-64897985d-748vh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:21:33.801050    4792 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0512 01:21:33.801050    4792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0512 01:21:33.822296    4792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0512 01:21:33.914665    4792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0512 01:21:35.582822    4792 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.9704468s)
	I0512 01:21:35.582972    4792 start.go:815] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0512 01:21:36.081616    4792 pod_ready.go:102] pod "coredns-64897985d-748vh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:21:37.683055    4792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.3662255s)
	I0512 01:21:38.188442    4792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.2738066s)
	I0512 01:21:38.188442    4792 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220512011148-7184"
	I0512 01:21:38.188442    4792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.3659211s)
	I0512 01:21:38.581328    4792 pod_ready.go:92] pod "coredns-64897985d-748vh" in "kube-system" namespace has status "Ready":"True"
	I0512 01:21:38.581328    4792 pod_ready.go:81] duration metric: took 7.9739625s waiting for pod "coredns-64897985d-748vh" in "kube-system" namespace to be "Ready" ...
	I0512 01:21:38.581328    4792 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-7b96c" in "kube-system" namespace to be "Ready" ...
	I0512 01:21:38.880554    4792 pod_ready.go:92] pod "coredns-64897985d-7b96c" in "kube-system" namespace has status "Ready":"True"
	I0512 01:21:38.880554    4792 pod_ready.go:81] duration metric: took 299.211ms waiting for pod "coredns-64897985d-7b96c" in "kube-system" namespace to be "Ready" ...
	I0512 01:21:38.880554    4792 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220512011148-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:21:39.073619    4792 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220512011148-7184" in "kube-system" namespace has status "Ready":"True"
	I0512 01:21:39.073619    4792 pod_ready.go:81] duration metric: took 193.0555ms waiting for pod "etcd-default-k8s-different-port-20220512011148-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:21:39.073619    4792 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220512011148-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:21:39.089651    4792 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220512011148-7184" in "kube-system" namespace has status "Ready":"True"
	I0512 01:21:39.089651    4792 pod_ready.go:81] duration metric: took 16.0306ms waiting for pod "kube-apiserver-default-k8s-different-port-20220512011148-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:21:39.089651    4792 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220512011148-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:21:41.633139    4792 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220512011148-7184" in "kube-system" namespace has status "Ready":"True"
	I0512 01:21:41.633139    4792 pod_ready.go:81] duration metric: took 2.5433572s waiting for pod "kube-controller-manager-default-k8s-different-port-20220512011148-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:21:41.633139    4792 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4dg2c" in "kube-system" namespace to be "Ready" ...
	I0512 01:21:43.978755    4792 pod_ready.go:92] pod "kube-proxy-4dg2c" in "kube-system" namespace has status "Ready":"True"
	I0512 01:21:43.978755    4792 pod_ready.go:81] duration metric: took 2.3454948s waiting for pod "kube-proxy-4dg2c" in "kube-system" namespace to be "Ready" ...
	I0512 01:21:43.978755    4792 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220512011148-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:21:44.086702    4792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.1715134s)
	I0512 01:21:44.090718    4792 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-05-12 01:12:54 UTC, end at Thu 2022-05-12 01:21:52 UTC. --
	May 12 01:18:32 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:18:32.586513900Z" level=info msg="ignoring event" container=26850d91e05e50e404cfbae0eb9a3758099cd1a8ad614d8e6c7b3f9e1d0d9b18 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:19:23 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:19:23.288891700Z" level=error msg="stream copy error: reading from a closed fifo"
	May 12 01:19:23 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:19:23.289200100Z" level=error msg="stream copy error: reading from a closed fifo"
	May 12 01:19:24 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:19:24.267261600Z" level=error msg="e7cb0d7181edd1d86d79ad9b4191be26320d98e31cbf341325033d69e3fc3cb3 cleanup: failed to delete container from containerd: no such container"
	May 12 01:19:24 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:19:24.267674600Z" level=error msg="Handler for POST /containers/e7cb0d7181edd1d86d79ad9b4191be26320d98e31cbf341325033d69e3fc3cb3/start returned error: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: writing syncT \"procResume\": write init-p: broken pipe: unknown"
	May 12 01:19:31 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:19:31.161874800Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:19:31 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:19:31.162183300Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:19:31 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:19:31.171747000Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:19:32 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:19:32.720085200Z" level=warning msg="reference for unknown type: " digest="sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2" remote="docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"
	May 12 01:19:56 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:19:56.391525800Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	May 12 01:19:56 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:19:56.591358600Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	May 12 01:20:14 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:20:14.202798600Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:20:14 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:20:14.205167900Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:20:14 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:20:14.232090300Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:20:14 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:20:14.712562600Z" level=info msg="ignoring event" container=47b400ef79f5d137f34caa11383a0e9ad1c28f2ae99e7685c5bb6e7bd9513f91 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:20:15 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:20:15.789896000Z" level=info msg="ignoring event" container=ab00bd231ec66c29a530b2ea2b905bcc464fa8d5d6ed515a1825aa8501bc2d08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:20:32 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:20:32.639646500Z" level=info msg="ignoring event" container=0754e7bc90666385decb9bee83def6c80f7df268f31433e5f339e9e447963dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:20:41 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:20:41.799381300Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:20:41 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:20:41.799567000Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:20:41 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:20:41.812380400Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:21:02 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:21:02.500445200Z" level=info msg="ignoring event" container=73624efb9aa4e1285e0ad418a15d51078278772c4d29458bccf6eb606275ff27 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 12 01:21:22 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:21:22.829733400Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:21:22 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:21:22.829969100Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:21:22 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:21:22.869715200Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:21:49 old-k8s-version-20220512010246-7184 dockerd[248]: time="2022-05-12T01:21:49.576338300Z" level=info msg="ignoring event" container=30bfb9a9fc53c40bae9991635654aa23f5cb99bfb75aa12020b77dc7937e0db3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
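	The fake.domain lookup failures above come from pulls of fake.domain/k8s.gcr.io/echoserver:1.4 (kubelet section below); fake.domain is a placeholder registry name that is not expected to resolve, so the DNS error, not a Docker networking fault, is the failure mode here. A minimal check from inside the node container (container name taken from these logs; getent assumed available in the image):
	
	    docker exec old-k8s-version-20220512010246-7184 getent hosts fake.domain   # non-zero exit when the name does not resolve
	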
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	30bfb9a9fc53c       a90209bb39e3d                                                                                    4 seconds ago        Exited              dashboard-metrics-scraper   4                   30a183e4533ff
	57dce2ef6b231       kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2   About a minute ago   Running             kubernetes-dashboard        0                   2059a775d6bc9
	4a09b1dec8680       6e38f40d628db                                                                                    2 minutes ago        Running             storage-provisioner         0                   082d65acc8fdc
	65058f3069c2f       bf261d1579144                                                                                    2 minutes ago        Running             coredns                     0                   5dcf8588c18e0
	e2f6eb90e5344       c21b0c7400f98                                                                                    2 minutes ago        Running             kube-proxy                  0                   b83104470f2c5
	9f2268fa3de9e       b2756210eeabf                                                                                    3 minutes ago        Running             etcd                        0                   0f8a4822f8593
	1e0cf8fdf46a2       06a629a7e51cd                                                                                    3 minutes ago        Running             kube-controller-manager     0                   19ad00af00379
	289649ce5a72e       b305571ca60a5                                                                                    3 minutes ago        Running             kube-apiserver              0                   090830a0e30d8
	eb642e98a5e5e       301ddc62b80b1                                                                                    3 minutes ago        Running             kube-scheduler              0                   fa0ec8426828a
	
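	Only dashboard-metrics-scraper has Exited (attempt 4) while every other container is Running, matching the CrashLoopBackOff entries in the kubelet section below. A sketch for retrieving the crashed container's last log through the cluster (pod name from the node description below; --previous selects the most recently terminated instance):
	
	    kubectl --context old-k8s-version-20220512010246-7184 -n kubernetes-dashboard logs dashboard-metrics-scraper-6b84985989-bn4zg --previous
	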
	* 
	* ==> coredns [65058f3069c2] <==
	* .:53
	2022-05-12T01:19:25.474Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2022-05-12T01:19:25.475Z [INFO] CoreDNS-1.6.2
	2022-05-12T01:19:25.475Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2022-05-12T01:19:54.427Z [INFO] plugin/reload: Running configuration MD5 = 034a4984a79adc08e57427d1bc08b68f
	[INFO] Reloading complete
	
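	The configuration MD5 changes between startup and the reload, so the CoreDNS ConfigMap was rewritten after the pod came up. To inspect the active Corefile (a sketch; coredns is the stock ConfigMap name in kube-system):
	
	    kubectl --context old-k8s-version-20220512010246-7184 -n kube-system get configmap coredns -o yaml
	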
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20220512010246-7184
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20220512010246-7184
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50a7977b568d2ad3e04003527a57f4502d6177a0
	                    minikube.k8s.io/name=old-k8s-version-20220512010246-7184
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_05_12T01_18_55_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 May 2022 01:18:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 May 2022 01:21:43 +0000   Thu, 12 May 2022 01:18:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 May 2022 01:21:43 +0000   Thu, 12 May 2022 01:18:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 May 2022 01:21:43 +0000   Thu, 12 May 2022 01:18:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 May 2022 01:21:43 +0000   Thu, 12 May 2022 01:18:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-20220512010246-7184
	Capacity:
	 cpu:                16
	 ephemeral-storage:  263174212Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             52638988Ki
	 pods:               110
	Allocatable:
	 cpu:                16
	 ephemeral-storage:  263174212Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             52638988Ki
	 pods:               110
	System Info:
	 Machine ID:                 8556a0a9a0e64ba4b825f672d2dce0b9
	 System UUID:                8556a0a9a0e64ba4b825f672d2dce0b9
	 Boot ID:                    10186544-b659-4889-afdb-c2512535b797
	 Kernel Version:             5.10.102.1-microsoft-standard-WSL2
	 OS Image:                   Ubuntu 20.04.4 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://20.10.15
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-ds6wg                                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m42s
	  kube-system                etcd-old-k8s-version-20220512010246-7184                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                kube-apiserver-old-k8s-version-20220512010246-7184             250m (1%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                kube-controller-manager-old-k8s-version-20220512010246-7184    200m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                kube-proxy-5dp6x                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                kube-scheduler-old-k8s-version-20220512010246-7184             100m (0%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                metrics-server-6f89b5864b-xnzbk                                100m (0%)     0 (0%)      200Mi (0%)       0 (0%)         2m25s
	  kube-system                storage-provisioner                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kubernetes-dashboard       dashboard-metrics-scraper-6b84985989-bn4zg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kubernetes-dashboard       kubernetes-dashboard-6fb5469cf5-mrs7d                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%)   0 (0%)
	  memory             270Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From                                             Message
	  ----    ------                   ----                   ----                                             -------
	  Normal  NodeHasSufficientMemory  3m13s (x8 over 3m14s)  kubelet, old-k8s-version-20220512010246-7184     Node old-k8s-version-20220512010246-7184 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m13s (x8 over 3m14s)  kubelet, old-k8s-version-20220512010246-7184     Node old-k8s-version-20220512010246-7184 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m13s (x7 over 3m14s)  kubelet, old-k8s-version-20220512010246-7184     Node old-k8s-version-20220512010246-7184 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m29s                  kube-proxy, old-k8s-version-20220512010246-7184  Starting kube-proxy.
	
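	Total requests (750m CPU, 270Mi memory) are a small fraction of the node's 16 CPUs and roughly 50Gi of memory, so nothing in this run points at capacity pressure. The same summary can be regenerated on demand (context and node name from these logs):
	
	    kubectl --context old-k8s-version-20220512010246-7184 describe node old-k8s-version-20220512010246-7184
	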
	* 
	* ==> dmesg <==
	* [May12 00:52] WSL2: Performing memory compaction.
	[May12 00:54] WSL2: Performing memory compaction.
	[May12 00:56] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.010948] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000005] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.036593] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000001] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000001] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[May12 00:57] WSL2: Performing memory compaction.
	[May12 00:58] WSL2: Performing memory compaction.
	[May12 01:00] WSL2: Performing memory compaction.
	[May12 01:01] WSL2: Performing memory compaction.
	[May12 01:02] WSL2: Performing memory compaction.
	[May12 01:03] WSL2: Performing memory compaction.
	[May12 01:05] WSL2: Performing memory compaction.
	[May12 01:06] WSL2: Performing memory compaction.
	[May12 01:07] WSL2: Performing memory compaction.
	[May12 01:08] WSL2: Performing memory compaction.
	[May12 01:09] WSL2: Performing memory compaction.
	[May12 01:12] WSL2: Performing memory compaction.
	[May12 01:14] WSL2: Performing memory compaction.
	[May12 01:16] WSL2: Performing memory compaction.
	[May12 01:19] WSL2: Performing memory compaction.
	
	* 
	* ==> etcd [9f2268fa3de9] <==
	* 2022-05-12 01:19:55.761407 W | etcdserver: read-only range request "key:\"/registry/limitranges\" range_end:\"/registry/limitranget\" count_only:true " with result "range_response_count:0 size:5" took too long (304.7121ms) to execute
	2022-05-12 01:20:00.282286 W | etcdserver: read-only range request "key:\"/registry/resourcequotas\" range_end:\"/registry/resourcequotat\" count_only:true " with result "range_response_count:0 size:5" took too long (116.9054ms) to execute
	2022-05-12 01:20:07.571835 W | etcdserver: read-only range request "key:\"/registry/configmaps\" range_end:\"/registry/configmapt\" count_only:true " with result "range_response_count:0 size:7" took too long (197.2191ms) to execute
	2022-05-12 01:20:18.575884 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (429.1823ms) to execute
	2022-05-12 01:20:29.778209 W | etcdserver: read-only range request "key:\"/registry/horizontalpodautoscalers\" range_end:\"/registry/horizontalpodautoscalert\" count_only:true " with result "range_response_count:0 size:5" took too long (100.2334ms) to execute
	2022-05-12 01:20:29.778668 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitiont\" count_only:true " with result "range_response_count:0 size:5" took too long (452.3901ms) to execute
	2022-05-12 01:20:29.779006 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (392.5599ms) to execute
	2022-05-12 01:20:31.908269 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (104.2955ms) to execute
	2022-05-12 01:20:50.371028 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings\" range_end:\"/registry/clusterrolebindingt\" count_only:true " with result "range_response_count:0 size:7" took too long (238.0992ms) to execute
	2022-05-12 01:21:34.894331 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (483.7141ms) to execute
	2022-05-12 01:21:34.894644 W | etcdserver: read-only range request "key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-bn4zg.16ee36d855315d6c\" " with result "range_response_count:1 size:597" took too long (1.153514s) to execute
	2022-05-12 01:21:34.894664 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-6f89b5864b-xnzbk\" " with result "range_response_count:1 size:1858" took too long (1.1536641s) to execute
	2022-05-12 01:21:34.894864 W | etcdserver: read-only range request "key:\"/registry/pods/default/metrics-server-6f89b5864b-xnzbk\" " with result "range_response_count:0 size:5" took too long (2.2392057s) to execute
	2022-05-12 01:21:34.895116 W | etcdserver: read-only range request "key:\"/registry/horizontalpodautoscalers\" range_end:\"/registry/horizontalpodautoscalert\" count_only:true " with result "range_response_count:0 size:5" took too long (1.9505979s) to execute
	2022-05-12 01:21:34.895225 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:172" took too long (889.5701ms) to execute
	2022-05-12 01:21:35.146379 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-6f89b5864b-xnzbk.16ee36ce139d593c\" " with result "range_response_count:1 size:550" took too long (235.4427ms) to execute
	2022-05-12 01:21:35.146598 W | etcdserver: read-only range request "key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" " with result "range_response_count:0 size:5" took too long (177.6334ms) to execute
	2022-05-12 01:21:41.573249 W | wal: sync duration of 2.2130918s, expected less than 1s
	2022-05-12 01:21:41.605968 W | etcdserver: read-only range request "key:\"/registry/persistentvolumeclaims\" range_end:\"/registry/persistentvolumeclaimt\" count_only:true " with result "range_response_count:0 size:5" took too long (639.1294ms) to execute
	2022-05-12 01:21:41.606178 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (1.136667s) to execute
	2022-05-12 01:21:41.606255 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (627.4337ms) to execute
	2022-05-12 01:21:43.889584 W | wal: sync duration of 2.266591s, expected less than 1s
	2022-05-12 01:21:43.932620 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses\" range_end:\"/registry/runtimeclasset\" count_only:true " with result "range_response_count:0 size:5" took too long (406.8092ms) to execute
	2022-05-12 01:21:43.942023 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (311.0111ms) to execute
	2022-05-12 01:21:43.942492 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (148.4831ms) to execute
	
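	The two wal sync warnings (over 2s against an expected <1s) indicate slow fsync on the WSL2-backed disk, which also accounts for the multi-second read-only range requests above. A crude synced-write probe from the node (a sketch; dd with oflag=dsync approximates etcd's per-write fsync pattern):
	
	    minikube -p old-k8s-version-20220512010246-7184 ssh -- "dd if=/dev/zero of=/tmp/dsync-probe bs=512 count=1000 oflag=dsync && rm /tmp/dsync-probe"
	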
	* 
	* ==> kernel <==
	*  01:21:53 up  2:29,  0 users,  load average: 12.28, 8.14, 5.74
	Linux old-k8s-version-20220512010246-7184 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [289649ce5a72] <==
	* E0512 01:20:30.574669       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0512 01:20:30.574739       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0512 01:21:34.895827       1 trace.go:116] Trace[821162573]: "Get" url:/api/v1/namespaces/default/pods/metrics-server-6f89b5864b-xnzbk (started: 2022-05-12 01:21:32.6542932 +0000 UTC m=+170.761999901) (total time: 2.241485s):
	Trace[821162573]: [2.241485s] [2.2413436s] END
	I0512 01:21:34.895890       1 trace.go:116] Trace[110587263]: "Get" url:/api/v1/namespaces/default (started: 2022-05-12 01:21:33.9965401 +0000 UTC m=+172.104246901) (total time: 899.3178ms):
	Trace[110587263]: [899.2272ms] [899.1843ms] About to write a response
	I0512 01:21:34.896288       1 trace.go:116] Trace[1877376660]: "Get" url:/api/v1/namespaces/kube-system/pods/metrics-server-6f89b5864b-xnzbk (started: 2022-05-12 01:21:33.7393378 +0000 UTC m=+171.847044901) (total time: 1.1569113s):
	Trace[1877376660]: [1.1566757s] [1.1565213s] About to write a response
	I0512 01:21:34.902882       1 trace.go:116] Trace[2062771071]: "GuaranteedUpdate etcd3" type:*core.Event (started: 2022-05-12 01:21:33.7397063 +0000 UTC m=+171.847413201) (total time: 1.1631419s):
	Trace[2062771071]: [1.1560911s] [1.1560911s] initial value restored
	I0512 01:21:34.903015       1 trace.go:116] Trace[947070464]: "Patch" url:/api/v1/namespaces/kubernetes-dashboard/events/dashboard-metrics-scraper-6b84985989-bn4zg.16ee36d855315d6c (started: 2022-05-12 01:21:33.7393663 +0000 UTC m=+171.847073101) (total time: 1.1636242s):
	Trace[947070464]: [1.1564355s] [1.1562416s] About to apply patch
	I0512 01:21:41.606850       1 trace.go:116] Trace[402812668]: "List etcd3" key:/jobs,resourceVersion:,limit:500,continue: (started: 2022-05-12 01:21:40.4682848 +0000 UTC m=+178.575991601) (total time: 1.1385159s):
	Trace[402812668]: [1.1385159s] [1.1385159s] END
	I0512 01:21:41.606995       1 trace.go:116] Trace[54644791]: "Create" url:/api/v1/namespaces/kube-system/events (started: 2022-05-12 01:21:40.9738174 +0000 UTC m=+179.081524201) (total time: 633.1387ms):
	Trace[54644791]: [632.961ms] [632.6663ms] Object stored in database
	I0512 01:21:41.607081       1 trace.go:116] Trace[894261720]: "List" url:/apis/batch/v1/jobs (started: 2022-05-12 01:21:40.4681824 +0000 UTC m=+178.575889201) (total time: 1.1388751s):
	Trace[894261720]: [1.1387756s] [1.1386821s] Listing from storage done
	I0512 01:21:41.610869       1 trace.go:116] Trace[2026560988]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2022-05-12 01:21:40.4299668 +0000 UTC m=+178.537673601) (total time: 1.1808674s):
	Trace[2026560988]: [1.1808331s] [1.180579s] Transaction committed
	I0512 01:21:41.611116       1 trace.go:116] Trace[372149738]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/old-k8s-version-20220512010246-7184 (started: 2022-05-12 01:21:40.429629 +0000 UTC m=+178.537335801) (total time: 1.1814589s):
	Trace[372149738]: [1.1813938s] [1.1811102s] Object stored in database
	I0512 01:21:41.612044       1 trace.go:116] Trace[1850281406]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath (started: 2022-05-12 01:21:40.9777727 +0000 UTC m=+179.085479501) (total time: 634.2413ms):
	Trace[1850281406]: [634.1491ms] [634.1143ms] About to write a response
	
	* 
	* ==> kube-controller-manager [1e0cf8fdf46a] <==
	* I0512 01:19:27.963562       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"59b4dcea-7d7c-4c1c-bb2d-03dd882b242c", APIVersion:"apps/v1", ResourceVersion:"437", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0512 01:19:28.161750       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0512 01:19:28.162035       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"f11d33f0-30f1-47e8-b3d7-3cd32f9b7c85", APIVersion:"apps/v1", ResourceVersion:"434", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0512 01:19:28.162074       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"59b4dcea-7d7c-4c1c-bb2d-03dd882b242c", APIVersion:"apps/v1", ResourceVersion:"437", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0512 01:19:28.162087       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5" failed with pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0512 01:19:28.273348       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"59b4dcea-7d7c-4c1c-bb2d-03dd882b242c", APIVersion:"apps/v1", ResourceVersion:"437", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0512 01:19:28.273385       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5" failed with pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0512 01:19:28.279867       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0512 01:19:28.279906       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"f11d33f0-30f1-47e8-b3d7-3cd32f9b7c85", APIVersion:"apps/v1", ResourceVersion:"434", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0512 01:19:28.364396       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5" failed with pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0512 01:19:28.364636       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"59b4dcea-7d7c-4c1c-bb2d-03dd882b242c", APIVersion:"apps/v1", ResourceVersion:"437", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0512 01:19:28.366962       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"f11d33f0-30f1-47e8-b3d7-3cd32f9b7c85", APIVersion:"apps/v1", ResourceVersion:"434", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0512 01:19:28.367129       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0512 01:19:29.562167       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"f11d33f0-30f1-47e8-b3d7-3cd32f9b7c85", APIVersion:"apps/v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-6b84985989-bn4zg
	I0512 01:19:29.563929       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"59b4dcea-7d7c-4c1c-bb2d-03dd882b242c", APIVersion:"apps/v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-6fb5469cf5-mrs7d
	E0512 01:19:41.065948       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 01:19:42.867438       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 01:20:11.367852       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 01:20:14.875860       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 01:20:41.624150       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 01:20:46.881699       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 01:21:11.881591       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 01:21:18.895095       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0512 01:21:42.136971       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0512 01:21:50.906276       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
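	The recurring metrics.k8s.io/v1beta1 discovery failures are downstream of the metrics-server pod never starting (its image pull fails; see the dockerd and kubelet sections), which leaves the aggregated APIService unavailable to the resource-quota and garbage-collector controllers. Its status can be read directly (a sketch):
	
	    kubectl --context old-k8s-version-20220512010246-7184 get apiservice v1beta1.metrics.k8s.io
	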
	* 
	* ==> kube-proxy [e2f6eb90e534] <==
	* W0512 01:19:23.689317       1 proxier.go:584] Failed to read file /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin with error open /lib/modules/5.10.102.1-microsoft-standard-WSL2/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0512 01:19:23.691124       1 proxier.go:597] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0512 01:19:23.692841       1 proxier.go:597] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0512 01:19:23.694545       1 proxier.go:597] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0512 01:19:23.696083       1 proxier.go:597] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0512 01:19:23.697485       1 proxier.go:597] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0512 01:19:23.704995       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0512 01:19:23.766882       1 node.go:135] Successfully retrieved node IP: 192.168.76.2
	I0512 01:19:23.767034       1 server_others.go:149] Using iptables Proxier.
	I0512 01:19:23.768628       1 server.go:529] Version: v1.16.0
	I0512 01:19:23.770453       1 config.go:313] Starting service config controller
	I0512 01:19:23.770890       1 config.go:131] Starting endpoints config controller
	I0512 01:19:23.772729       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0512 01:19:23.773119       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0512 01:19:23.873839       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0512 01:19:23.874148       1 shared_informer.go:204] Caches are synced for service config 
	
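	The module warnings are benign in a container without /lib/modules mounted (the messages say as much), and with proxy-mode unset kube-proxy assumed the iptables proxier and synced its caches normally. Whether the host kernel carries the ipvs modules anyway can be checked from the node (a sketch):
	
	    minikube -p old-k8s-version-20220512010246-7184 ssh -- "lsmod | grep -e ip_vs -e nf_conntrack"
	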
	* 
	* ==> kube-scheduler [eb642e98a5e5] <==
	* I0512 01:18:50.868135       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0512 01:18:50.869133       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0512 01:18:51.170922       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0512 01:18:51.171110       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0512 01:18:51.171114       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0512 01:18:51.264435       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0512 01:18:51.264576       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0512 01:18:51.264541       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0512 01:18:51.264733       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0512 01:18:51.267317       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0512 01:18:51.267461       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0512 01:18:51.267341       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0512 01:18:51.267439       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0512 01:18:52.172926       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0512 01:18:52.263523       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0512 01:18:52.265790       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0512 01:18:52.266851       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0512 01:18:52.268806       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0512 01:18:52.270000       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0512 01:18:52.272297       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0512 01:18:52.273212       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0512 01:18:52.275642       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0512 01:18:52.276881       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0512 01:18:52.279021       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0512 01:19:10.669095       1 factory.go:585] pod is already present in the activeQ
	
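	The forbidden list errors are confined to 01:18:51-01:18:52, before the apiserver finished bootstrapping the system:kube-scheduler RBAC bindings, and do not recur afterwards. Once the cluster is up, the permission can be verified by impersonation (a sketch):
	
	    kubectl --context old-k8s-version-20220512010246-7184 auth can-i list pods --as=system:kube-scheduler
	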
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-05-12 01:12:54 UTC, end at Thu 2022-05-12 01:21:53 UTC. --
	May 12 01:20:41 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:20:41.813887    5464 kuberuntime_image.go:50] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	May 12 01:20:41 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:20:41.814171    5464 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	May 12 01:20:41 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:20:41.814221    5464 pod_workers.go:191] Error syncing pod 7c6b6847-36d4-4700-b45c-4e00a73b9477 ("metrics-server-6f89b5864b-xnzbk_kube-system(7c6b6847-36d4-4700-b45c-4e00a73b9477)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:20:50 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:20:50.735711    5464 pod_workers.go:191] Error syncing pod b83b5a1e-8008-45e7-b80d-6a9c27bf5f98 ("dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"
	May 12 01:20:56 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:20:56.740908    5464 pod_workers.go:191] Error syncing pod 7c6b6847-36d4-4700-b45c-4e00a73b9477 ("metrics-server-6f89b5864b-xnzbk_kube-system(7c6b6847-36d4-4700-b45c-4e00a73b9477)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	May 12 01:21:02 old-k8s-version-20220512010246-7184 kubelet[5464]: W0512 01:21:02.076747    5464 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-bn4zg through plugin: invalid network status for
	May 12 01:21:02 old-k8s-version-20220512010246-7184 kubelet[5464]: W0512 01:21:02.587432    5464 container.go:409] Failed to create summary reader for "/kubepods/besteffort/podb83b5a1e-8008-45e7-b80d-6a9c27bf5f98/73624efb9aa4e1285e0ad418a15d51078278772c4d29458bccf6eb606275ff27": none of the resources are being tracked.
	May 12 01:21:03 old-k8s-version-20220512010246-7184 kubelet[5464]: W0512 01:21:03.498096    5464 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-bn4zg through plugin: invalid network status for
	May 12 01:21:03 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:21:03.512093    5464 pod_workers.go:191] Error syncing pod b83b5a1e-8008-45e7-b80d-6a9c27bf5f98 ("dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"
	May 12 01:21:04 old-k8s-version-20220512010246-7184 kubelet[5464]: W0512 01:21:04.534363    5464 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-bn4zg through plugin: invalid network status for
	May 12 01:21:06 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:21:06.433186    5464 pod_workers.go:191] Error syncing pod b83b5a1e-8008-45e7-b80d-6a9c27bf5f98 ("dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"
	May 12 01:21:10 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:21:10.740213    5464 pod_workers.go:191] Error syncing pod 7c6b6847-36d4-4700-b45c-4e00a73b9477 ("metrics-server-6f89b5864b-xnzbk_kube-system(7c6b6847-36d4-4700-b45c-4e00a73b9477)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	May 12 01:21:21 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:21:21.736597    5464 pod_workers.go:191] Error syncing pod b83b5a1e-8008-45e7-b80d-6a9c27bf5f98 ("dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"
	May 12 01:21:22 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:21:22.871658    5464 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	May 12 01:21:22 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:21:22.871843    5464 kuberuntime_image.go:50] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	May 12 01:21:22 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:21:22.872027    5464 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	May 12 01:21:22 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:21:22.872094    5464 pod_workers.go:191] Error syncing pod 7c6b6847-36d4-4700-b45c-4e00a73b9477 ("metrics-server-6f89b5864b-xnzbk_kube-system(7c6b6847-36d4-4700-b45c-4e00a73b9477)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	May 12 01:21:33 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:21:33.737247    5464 pod_workers.go:191] Error syncing pod b83b5a1e-8008-45e7-b80d-6a9c27bf5f98 ("dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"
	May 12 01:21:33 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:21:33.741469    5464 pod_workers.go:191] Error syncing pod 7c6b6847-36d4-4700-b45c-4e00a73b9477 ("metrics-server-6f89b5864b-xnzbk_kube-system(7c6b6847-36d4-4700-b45c-4e00a73b9477)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	May 12 01:21:45 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:21:45.742806    5464 pod_workers.go:191] Error syncing pod 7c6b6847-36d4-4700-b45c-4e00a73b9477 ("metrics-server-6f89b5864b-xnzbk_kube-system(7c6b6847-36d4-4700-b45c-4e00a73b9477)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	May 12 01:21:49 old-k8s-version-20220512010246-7184 kubelet[5464]: W0512 01:21:49.233321    5464 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-bn4zg through plugin: invalid network status for
	May 12 01:21:49 old-k8s-version-20220512010246-7184 kubelet[5464]: W0512 01:21:49.787441    5464 container.go:409] Failed to create summary reader for "/kubepods/besteffort/podb83b5a1e-8008-45e7-b80d-6a9c27bf5f98/30bfb9a9fc53c40bae9991635654aa23f5cb99bfb75aa12020b77dc7937e0db3": none of the resources are being tracked.
	May 12 01:21:50 old-k8s-version-20220512010246-7184 kubelet[5464]: W0512 01:21:50.525239    5464 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-bn4zg through plugin: invalid network status for
	May 12 01:21:50 old-k8s-version-20220512010246-7184 kubelet[5464]: E0512 01:21:50.547818    5464 pod_workers.go:191] Error syncing pod b83b5a1e-8008-45e7-b80d-6a9c27bf5f98 ("dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6b84985989-bn4zg_kubernetes-dashboard(b83b5a1e-8008-45e7-b80d-6a9c27bf5f98)"
	May 12 01:21:51 old-k8s-version-20220512010246-7184 kubelet[5464]: W0512 01:21:51.561685    5464 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-6b84985989-bn4zg through plugin: invalid network status for
	
	* 
	* ==> kubernetes-dashboard [57dce2ef6b23] <==
	* 2022/05/12 01:19:56 Starting overwatch
	2022/05/12 01:19:56 Using namespace: kubernetes-dashboard
	2022/05/12 01:19:56 Using in-cluster config to connect to apiserver
	2022/05/12 01:19:56 Using secret token for csrf signing
	2022/05/12 01:19:56 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/05/12 01:19:56 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/05/12 01:19:56 Successful initial request to the apiserver, version: v1.16.0
	2022/05/12 01:19:56 Generating JWE encryption key
	2022/05/12 01:19:56 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/05/12 01:19:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/05/12 01:19:58 Initializing JWE encryption key from synchronized object
	2022/05/12 01:19:58 Creating in-cluster Sidecar client
	2022/05/12 01:19:58 Serving insecurely on HTTP port: 9090
	2022/05/12 01:19:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/05/12 01:20:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/05/12 01:20:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/05/12 01:21:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
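	The dashboard itself serves on port 9090; only its metric client fails, because the dashboard-metrics-scraper backend is crash-looping (kubelet section above). Service, endpoint, and pod state can be checked together (a sketch):
	
	    kubectl --context old-k8s-version-20220512010246-7184 -n kubernetes-dashboard get svc,endpoints,pods
	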
	* 
	* ==> storage-provisioner [4a09b1dec868] <==
	* I0512 01:19:30.284416       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0512 01:19:30.376943       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0512 01:19:30.377085       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0512 01:19:30.473700       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0512 01:19:30.473797       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0ac414cb-8b00-43fe-ac13-d4acc19bfd4f", APIVersion:"v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20220512010246-7184_1f959df6-09f4-46af-8951-76ce3599dc39 became leader
	I0512 01:19:30.474071       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20220512010246-7184_1f959df6-09f4-46af-8951-76ce3599dc39!
	I0512 01:19:30.574586       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20220512010246-7184_1f959df6-09f4-46af-8951-76ce3599dc39!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20220512010246-7184 -n old-k8s-version-20220512010246-7184

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-20220512010246-7184 -n old-k8s-version-20220512010246-7184: (7.4344579s)
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-20220512010246-7184 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-6f89b5864b-xnzbk
helpers_test.go:272: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context old-k8s-version-20220512010246-7184 describe pod metrics-server-6f89b5864b-xnzbk
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220512010246-7184 describe pod metrics-server-6f89b5864b-xnzbk: exit status 1 (436.4623ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6f89b5864b-xnzbk" not found

** /stderr **
helpers_test.go:277: kubectl --context old-k8s-version-20220512010246-7184 describe pod metrics-server-6f89b5864b-xnzbk: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (68.39s)
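The NotFound from describe means the pod named by the non-running listing was gone by the time the follow-up ran (the intervening status call alone took 7.4s), so the post-mortem describe raced pod churn. Its trail usually survives in events, which can still be queried by the stale name (a sketch):

    kubectl --context old-k8s-version-20220512010246-7184 get events -A --field-selector involvedObject.name=metrics-server-6f89b5864b-xnzbk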

TestNetworkPlugins/group/cilium/Start (977.31s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p cilium-20220512010244-7184 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cilium-20220512010244-7184 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker: exit status 80 (16m16.4109429s)

-- stdout --
	* [cilium-20220512010244-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13639
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node cilium-20220512010244-7184 in cluster cilium-20220512010244-7184
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "cilium-20220512010244-7184" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Cilium (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0512 01:22:15.190948    5648 out.go:296] Setting OutFile to fd 1732 ...
	I0512 01:22:15.256897    5648 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:22:15.256897    5648 out.go:309] Setting ErrFile to fd 1956...
	I0512 01:22:15.256897    5648 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:22:15.270065    5648 out.go:303] Setting JSON to false
	I0512 01:22:15.272590    5648 start.go:115] hostinfo: {"hostname":"minikube4","uptime":16988,"bootTime":1652301547,"procs":168,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0512 01:22:15.272590    5648 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0512 01:22:15.287569    5648 out.go:177] * [cilium-20220512010244-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0512 01:22:15.298675    5648 notify.go:193] Checking for updates...
	I0512 01:22:15.306306    5648 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0512 01:22:15.318255    5648 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0512 01:22:15.328165    5648 out.go:177]   - MINIKUBE_LOCATION=13639
	I0512 01:22:15.339760    5648 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0512 01:22:15.350855    5648 config.go:178] Loaded profile config "auto-20220512010229-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:22:15.351538    5648 config.go:178] Loaded profile config "default-k8s-different-port-20220512011148-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:22:15.352747    5648 config.go:178] Loaded profile config "old-k8s-version-20220512010246-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0512 01:22:15.352956    5648 driver.go:358] Setting default libvirt URI to qemu:///system
	I0512 01:22:18.445218    5648 docker.go:137] docker version: linux-20.10.14
	I0512 01:22:18.454376    5648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:22:20.769209    5648 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.3147139s)
	I0512 01:22:20.770405    5648 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:79 OomKillDisable:true NGoroutines:60 SystemTime:2022-05-12 01:22:19.6140051 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:22:20.778722    5648 out.go:177] * Using the docker driver based on user configuration
	I0512 01:22:20.786973    5648 start.go:284] selected driver: docker
	I0512 01:22:20.787242    5648 start.go:801] validating driver "docker" against <nil>
	I0512 01:22:20.787242    5648 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0512 01:22:20.867479    5648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:22:23.248247    5648 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.3806454s)
	I0512 01:22:23.248247    5648 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:72 OomKillDisable:true NGoroutines:58 SystemTime:2022-05-12 01:22:22.1156438 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:22:23.248247    5648 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0512 01:22:23.249258    5648 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0512 01:22:23.257634    5648 out.go:177] * Using Docker Desktop driver with the root privilege
	I0512 01:22:23.262249    5648 cni.go:95] Creating CNI manager for "cilium"
	I0512 01:22:23.262249    5648 start_flags.go:301] Found "Cilium" CNI - setting NetworkPlugin=cni
	I0512 01:22:23.262249    5648 start_flags.go:306] config:
	{Name:cilium-20220512010244-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:cilium-20220512010244-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 01:22:23.279254    5648 out.go:177] * Starting control plane node cilium-20220512010244-7184 in cluster cilium-20220512010244-7184
	I0512 01:22:23.285816    5648 cache.go:120] Beginning downloading kic base image for docker with docker
	I0512 01:22:23.290503    5648 out.go:177] * Pulling base image ...
	I0512 01:22:23.296504    5648 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:22:23.296504    5648 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0512 01:22:23.296504    5648 preload.go:148] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0512 01:22:23.296504    5648 cache.go:57] Caching tarball of preloaded images
	I0512 01:22:23.297332    5648 preload.go:174] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0512 01:22:23.297332    5648 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0512 01:22:23.298006    5648 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\config.json ...
	I0512 01:22:23.298006    5648 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\config.json: {Name:mk6e91c5324c4751b8b8e41dcdd1fd69e6358f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:22:24.494151    5648 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
	I0512 01:22:24.494362    5648 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in daemon, skipping load
	I0512 01:22:24.494362    5648 cache.go:206] Successfully downloaded all kic artifacts
	I0512 01:22:24.494466    5648 start.go:352] acquiring machines lock for cilium-20220512010244-7184: {Name:mkbe9678a6d90ab5b23947d8663ec9b1034e388f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 01:22:24.494775    5648 start.go:356] acquired machines lock for "cilium-20220512010244-7184" in 190.7µs
	I0512 01:22:24.494897    5648 start.go:91] Provisioning new machine with config: &{Name:cilium-20220512010244-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:cilium-20220512010244-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 01:22:24.494897    5648 start.go:131] createHost starting for "" (driver="docker")
	I0512 01:22:24.501873    5648 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0512 01:22:24.502753    5648 start.go:165] libmachine.API.Create for "cilium-20220512010244-7184" (driver="docker")
	I0512 01:22:24.502817    5648 client.go:168] LocalClient.Create starting
	I0512 01:22:24.502943    5648 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0512 01:22:24.502943    5648 main.go:134] libmachine: Decoding PEM data...
	I0512 01:22:24.502943    5648 main.go:134] libmachine: Parsing certificate...
	I0512 01:22:24.503720    5648 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0512 01:22:24.503720    5648 main.go:134] libmachine: Decoding PEM data...
	I0512 01:22:24.503720    5648 main.go:134] libmachine: Parsing certificate...
	I0512 01:22:24.512729    5648 cli_runner.go:164] Run: docker network inspect cilium-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0512 01:22:25.676043    5648 cli_runner.go:211] docker network inspect cilium-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 01:22:25.676043    5648 cli_runner.go:217] Completed: docker network inspect cilium-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.163133s)
	I0512 01:22:25.691152    5648 network_create.go:272] running [docker network inspect cilium-20220512010244-7184] to gather additional debugging logs...
	I0512 01:22:25.691152    5648 cli_runner.go:164] Run: docker network inspect cilium-20220512010244-7184
	W0512 01:22:26.856944    5648 cli_runner.go:211] docker network inspect cilium-20220512010244-7184 returned with exit code 1
	I0512 01:22:26.856944    5648 cli_runner.go:217] Completed: docker network inspect cilium-20220512010244-7184: (1.1657323s)
	I0512 01:22:26.856944    5648 network_create.go:275] error running [docker network inspect cilium-20220512010244-7184]: docker network inspect cilium-20220512010244-7184: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220512010244-7184
	I0512 01:22:26.856944    5648 network_create.go:277] output of [docker network inspect cilium-20220512010244-7184]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220512010244-7184
	
	** /stderr **
	I0512 01:22:26.864937    5648 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 01:22:28.023040    5648 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1580427s)
	I0512 01:22:28.043052    5648 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005a90e0] misses:0}
	I0512 01:22:28.043052    5648 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:22:28.043052    5648 network_create.go:115] attempt to create docker network cilium-20220512010244-7184 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0512 01:22:28.052053    5648 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220512010244-7184
	I0512 01:22:29.628480    5648 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220512010244-7184: (1.5763462s)
	I0512 01:22:29.628480    5648 network_create.go:99] docker network cilium-20220512010244-7184 192.168.49.0/24 created
	I0512 01:22:29.628480    5648 kic.go:106] calculated static IP "192.168.49.2" for the "cilium-20220512010244-7184" container
	I0512 01:22:29.642479    5648 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0512 01:22:30.914767    5648 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.2722224s)
	I0512 01:22:32.033861    5648 cli_runner.go:164] Run: docker volume create cilium-20220512010244-7184 --label name.minikube.sigs.k8s.io=cilium-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true
	I0512 01:22:33.339176    5648 cli_runner.go:217] Completed: docker volume create cilium-20220512010244-7184 --label name.minikube.sigs.k8s.io=cilium-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true: (1.3052471s)
	I0512 01:22:33.339569    5648 oci.go:103] Successfully created a docker volume cilium-20220512010244-7184
	I0512 01:22:33.348742    5648 cli_runner.go:164] Run: docker run --rm --name cilium-20220512010244-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220512010244-7184 --entrypoint /usr/bin/test -v cilium-20220512010244-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib
	I0512 01:22:37.008393    5648 cli_runner.go:217] Completed: docker run --rm --name cilium-20220512010244-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220512010244-7184 --entrypoint /usr/bin/test -v cilium-20220512010244-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib: (3.6594333s)
	I0512 01:22:37.008393    5648 oci.go:107] Successfully prepared a docker volume cilium-20220512010244-7184
	I0512 01:22:37.008393    5648 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:22:37.008393    5648 kic.go:179] Starting extracting preloaded images to volume ...
	I0512 01:22:37.018386    5648 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20220512010244-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir
	I0512 01:23:05.895277    5648 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20220512010244-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir: (28.8754035s)
	I0512 01:23:05.895277    5648 kic.go:188] duration metric: took 28.885396 seconds to extract preloaded images to volume
	I0512 01:23:05.911357    5648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:23:08.000670    5648 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0892052s)
	I0512 01:23:08.000670    5648 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:73 OomKillDisable:true NGoroutines:58 SystemTime:2022-05-12 01:23:06.9406973 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:23:08.008685    5648 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0512 01:23:10.172772    5648 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.1639159s)
	I0512 01:23:10.180663    5648 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220512010244-7184 --name cilium-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220512010244-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220512010244-7184 --network cilium-20220512010244-7184 --ip 192.168.49.2 --volume cilium-20220512010244-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
	W0512 01:23:11.490794    5648 cli_runner.go:211] docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220512010244-7184 --name cilium-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220512010244-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220512010244-7184 --network cilium-20220512010244-7184 --ip 192.168.49.2 --volume cilium-20220512010244-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a returned with exit code 125
	I0512 01:23:11.490794    5648 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220512010244-7184 --name cilium-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220512010244-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220512010244-7184 --network cilium-20220512010244-7184 --ip 192.168.49.2 --volume cilium-20220512010244-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: (1.3100636s)
	I0512 01:23:11.490794    5648 client.go:171] LocalClient.Create took 46.9855578s
	I0512 01:23:13.516342    5648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 01:23:13.522350    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	W0512 01:23:14.625687    5648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184 returned with exit code 1
	I0512 01:23:14.625687    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.1032805s)
	I0512 01:23:14.625687    5648 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0512 01:23:14.921379    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	W0512 01:23:16.064775    5648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184 returned with exit code 1
	I0512 01:23:16.064775    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.1433377s)
	W0512 01:23:16.064775    5648 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0512 01:23:16.064775    5648 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0512 01:23:16.076772    5648 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0512 01:23:16.084785    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	W0512 01:23:17.140407    5648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184 returned with exit code 1
	I0512 01:23:17.140407    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.0554863s)
	I0512 01:23:17.140589    5648 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0512 01:23:17.450349    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	W0512 01:23:18.638563    5648 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184 returned with exit code 1
	I0512 01:23:18.638624    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.1880273s)
	W0512 01:23:18.638819    5648 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0512 01:23:18.638859    5648 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0512 01:23:18.638859    5648 start.go:134] duration metric: createHost completed in 54.1411745s
	I0512 01:23:18.638859    5648 start.go:81] releasing machines lock for "cilium-20220512010244-7184", held for 54.1411745s
	W0512 01:23:18.639078    5648 start.go:608] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220512010244-7184 --name cilium-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220512010244-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220512010244-7184 --network cilium-20220512010244-7184 --ip 192.168.49.2 --volume cilium-20220512010244-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: exit status 125
	stdout:
	0d3518a691174bcf808ea9bc52659184ba79f38e0aec31c42ea6281a82917e86
	
	stderr:
	docker: Error response from daemon: network cilium-20220512010244-7184 not found.
	I0512 01:23:18.656716    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:23:19.802254    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1454799s)
	W0512 01:23:19.802254    5648 start.go:613] delete host: Docker machine "cilium-20220512010244-7184" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0512 01:23:19.802254    5648 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220512010244-7184 --name cilium-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220512010244-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220512010244-7184 --network cilium-20220512010244-7184 --ip 192.168.49.2 --volume cilium-20220512010244-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: exit status 125
	stdout:
	0d3518a691174bcf808ea9bc52659184ba79f38e0aec31c42ea6281a82917e86
	
	stderr:
	docker: Error response from daemon: network cilium-20220512010244-7184 not found.
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220512010244-7184 --name cilium-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220512010244-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220512010244-7184 --network cilium-20220512010244-7184 --ip 192.168.49.2 --volume cilium-20220512010244-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: exit status 125
	stdout:
	0d3518a691174bcf808ea9bc52659184ba79f38e0aec31c42ea6281a82917e86
	
	stderr:
	docker: Error response from daemon: network cilium-20220512010244-7184 not found.
	
	I0512 01:23:19.802254    5648 start.go:623] Will try again in 5 seconds ...
	I0512 01:23:24.806405    5648 start.go:352] acquiring machines lock for cilium-20220512010244-7184: {Name:mkbe9678a6d90ab5b23947d8663ec9b1034e388f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 01:23:24.806405    5648 start.go:356] acquired machines lock for "cilium-20220512010244-7184" in 0s
	I0512 01:23:24.806405    5648 start.go:94] Skipping create...Using existing machine configuration
	I0512 01:23:24.806405    5648 fix.go:55] fixHost starting: 
	I0512 01:23:24.825194    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:23:26.000895    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1756403s)
	I0512 01:23:26.000895    5648 fix.go:103] recreateIfNeeded on cilium-20220512010244-7184: state= err=<nil>
	I0512 01:23:26.000895    5648 fix.go:108] machineExists: false. err=machine does not exist
	I0512 01:23:26.426657    5648 out.go:177] * docker "cilium-20220512010244-7184" container is missing, will recreate.
	I0512 01:23:26.707903    5648 delete.go:124] DEMOLISHING cilium-20220512010244-7184 ...
	I0512 01:23:26.726190    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:23:27.830923    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1046768s)
	I0512 01:23:27.830923    5648 stop.go:79] host is in state 
	I0512 01:23:27.830923    5648 main.go:134] libmachine: Stopping "cilium-20220512010244-7184"...
	I0512 01:23:27.848564    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:23:28.978736    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1300256s)
	I0512 01:23:28.999275    5648 kic_runner.go:93] Run: systemctl --version
	I0512 01:23:28.999275    5648 kic_runner.go:114] Args: [docker exec --privileged cilium-20220512010244-7184 systemctl --version]
	I0512 01:23:30.209215    5648 kic_runner.go:93] Run: sudo service kubelet stop
	I0512 01:23:30.209215    5648 kic_runner.go:114] Args: [docker exec --privileged cilium-20220512010244-7184 sudo service kubelet stop]
	I0512 01:23:31.436008    5648 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 0d3518a691174bcf808ea9bc52659184ba79f38e0aec31c42ea6281a82917e86 is not running
	
	** /stderr **
	W0512 01:23:31.436008    5648 kic.go:439] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 0d3518a691174bcf808ea9bc52659184ba79f38e0aec31c42ea6281a82917e86 is not running
	I0512 01:23:31.462023    5648 kic_runner.go:93] Run: sudo service kubelet stop
	I0512 01:23:31.462023    5648 kic_runner.go:114] Args: [docker exec --privileged cilium-20220512010244-7184 sudo service kubelet stop]
	I0512 01:23:32.679209    5648 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 0d3518a691174bcf808ea9bc52659184ba79f38e0aec31c42ea6281a82917e86 is not running
	
	** /stderr **
	W0512 01:23:32.679209    5648 kic.go:441] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 0d3518a691174bcf808ea9bc52659184ba79f38e0aec31c42ea6281a82917e86 is not running
	I0512 01:23:32.698875    5648 kic_runner.go:93] Run: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0512 01:23:32.698875    5648 kic_runner.go:114] Args: [docker exec --privileged cilium-20220512010244-7184 docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}]
	I0512 01:23:33.912382    5648 kic.go:452] unable list containers : docker: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 0d3518a691174bcf808ea9bc52659184ba79f38e0aec31c42ea6281a82917e86 is not running
	I0512 01:23:33.912382    5648 kic.go:462] successfully stopped kubernetes!
	I0512 01:23:33.929405    5648 kic_runner.go:93] Run: pgrep kube-apiserver
	I0512 01:23:33.929405    5648 kic_runner.go:114] Args: [docker exec --privileged cilium-20220512010244-7184 pgrep kube-apiserver]
	I0512 01:23:36.237499    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:23:37.321466    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0833737s)
	I0512 01:23:40.340785    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:23:41.439630    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0987887s)
	I0512 01:23:44.461822    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:23:45.602236    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1398185s)
	I0512 01:23:48.634815    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:23:49.749434    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1145614s)
	I0512 01:23:52.767931    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:23:53.867860    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0998726s)
	I0512 01:23:56.900048    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:23:58.032579    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1324727s)
	I0512 01:24:01.062212    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:02.149284    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0868202s)
	I0512 01:24:05.168649    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:06.205352    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0363531s)
	I0512 01:24:09.237116    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:10.296283    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0590544s)
	I0512 01:24:13.311216    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:14.360006    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0487366s)
	I0512 01:24:17.383567    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:18.468014    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0843126s)
	I0512 01:24:21.489907    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:22.527100    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0365703s)
	I0512 01:24:25.547005    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:26.617179    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0701188s)
	I0512 01:24:29.646151    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:30.745072    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0988643s)
	I0512 01:24:33.764672    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:34.838619    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0738918s)
	I0512 01:24:37.856630    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:38.967452    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1106417s)
	I0512 01:24:41.989633    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:43.072258    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0825695s)
	I0512 01:24:46.094768    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:47.138683    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0438614s)
	I0512 01:24:50.167250    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:51.278917    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1116091s)
	I0512 01:24:54.296249    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:55.341907    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.045604s)
	I0512 01:24:58.359838    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:59.413827    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0537333s)
	I0512 01:25:02.440699    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:03.570936    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1300338s)
	I0512 01:25:06.588518    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:07.714321    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1255928s)
	I0512 01:25:10.738760    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:11.802470    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0636556s)
	I0512 01:25:14.820936    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:15.917756    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0967642s)
	I0512 01:25:18.941550    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:20.031940    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0903335s)
	I0512 01:25:23.057500    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:24.171983    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1144257s)
	I0512 01:25:27.189926    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:28.338703    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1486433s)
	I0512 01:25:31.368762    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:32.450123    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0813052s)
	I0512 01:25:35.473910    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:36.563644    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0896775s)
	I0512 01:25:39.587099    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:40.651676    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0645221s)
	I0512 01:25:43.676261    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:44.730795    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0544795s)
	I0512 01:25:47.754025    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:48.867686    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1136046s)
	I0512 01:25:51.903555    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:52.984565    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0808686s)
	I0512 01:25:56.011016    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:57.100611    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0895385s)
	I0512 01:26:00.123096    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:01.252286    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1291327s)
	I0512 01:26:04.267079    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:05.372028    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1048917s)
	I0512 01:26:08.392088    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:09.509038    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1158639s)
	I0512 01:26:12.539501    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:13.591326    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0517211s)
	I0512 01:26:16.615476    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:17.702168    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0864788s)
	I0512 01:26:20.726503    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:21.904832    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1781247s)
	I0512 01:26:24.926644    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:26.118892    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1920023s)
	I0512 01:26:29.145266    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:30.306966    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1615464s)
	I0512 01:26:33.326102    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:34.396691    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0704178s)
	I0512 01:26:37.418676    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:38.540421    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1216878s)
	I0512 01:26:41.565495    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:42.886872    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.3213097s)
	I0512 01:26:45.908875    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:47.012280    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.10309s)
	I0512 01:26:50.033030    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:51.134077    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.100991s)
	I0512 01:26:54.163179    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:55.250821    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0874049s)
	I0512 01:26:58.276391    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:59.359642    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0824822s)
	I0512 01:27:02.384824    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:03.462564    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0775297s)
	I0512 01:27:06.485414    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:07.604331    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1185536s)
	I0512 01:27:10.628400    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:11.830011    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.2015496s)
	I0512 01:27:14.851464    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:15.910840    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0593219s)
	I0512 01:27:18.940837    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:20.029143    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0877176s)
	I0512 01:27:23.059575    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:24.179339    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1194212s)
	I0512 01:27:27.200025    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:28.325436    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1251909s)
	I0512 01:27:31.348967    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:32.403491    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0544698s)
	I0512 01:27:35.432120    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:36.521038    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0887066s)
	I0512 01:27:39.540067    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:40.609739    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0696177s)
	I0512 01:27:43.621766    5648 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0512 01:27:43.621766    5648 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
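The sixty `docker container inspect ... --format={{.State.Status}}` pairs above are minikube polling the container state on a roughly four-second cadence while trying to stop the host; once the retry budget is spent it logs the stop error and falls through to deletion. A minimal Go sketch of that poll-until-exited pattern, assuming helper names and a fixed sleep (this is not minikube's actual stop.go code):

// Sketch: poll `docker container inspect` until the container reports
// "exited" or the retry budget (60 here, matching the log) runs out.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

func waitForExit(name string, maxRetries int) error {
	for i := 0; i < maxRetries; i++ {
		if status, err := containerStatus(name); err == nil && status == "exited" {
			return nil
		}
		time.Sleep(4 * time.Second) // roughly the cadence visible in the timestamps
	}
	return fmt.Errorf("stop: Maximum number of retries (%d) exceeded", maxRetries)
}

func main() {
	if err := waitForExit("cilium-20220512010244-7184", 60); err != nil {
		fmt.Println("stop err:", err)
	}
}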
	I0512 01:27:43.636967    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:44.708459    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0714369s)
	W0512 01:27:44.708459    5648 delete.go:135] deletehost failed: Docker machine "cilium-20220512010244-7184" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0512 01:27:44.716760    5648 cli_runner.go:164] Run: docker container inspect -f {{.Id}} cilium-20220512010244-7184
	I0512 01:27:45.812408    5648 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} cilium-20220512010244-7184: (1.0955926s)
	I0512 01:27:45.819407    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:46.871730    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0522696s)
	I0512 01:27:46.880777    5648 cli_runner.go:164] Run: docker exec --privileged -t cilium-20220512010244-7184 /bin/bash -c "sudo init 0"
	W0512 01:27:48.011078    5648 cli_runner.go:211] docker exec --privileged -t cilium-20220512010244-7184 /bin/bash -c "sudo init 0" returned with exit code 1
	I0512 01:27:48.011134    5648 cli_runner.go:217] Completed: docker exec --privileged -t cilium-20220512010244-7184 /bin/bash -c "sudo init 0": (1.1301918s)
	I0512 01:27:48.011134    5648 oci.go:625] error shutdown cilium-20220512010244-7184: docker exec --privileged -t cilium-20220512010244-7184 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 0d3518a691174bcf808ea9bc52659184ba79f38e0aec31c42ea6281a82917e86 is not running
	I0512 01:27:49.031788    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:50.115459    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0835218s)
	I0512 01:27:50.115503    5648 oci.go:639] temporary error: container cilium-20220512010244-7184 status is  but expect it to be exited
	I0512 01:27:50.115503    5648 oci.go:645] Successfully shutdown container cilium-20220512010244-7184
	I0512 01:27:50.124556    5648 cli_runner.go:164] Run: docker rm -f -v cilium-20220512010244-7184
	I0512 01:27:51.207293    5648 cli_runner.go:217] Completed: docker rm -f -v cilium-20220512010244-7184: (1.0826812s)
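For context: the `sudo init 0` exec above exits with status 1 because the container has already stopped, which the cleanup path treats as a successful shutdown, and `docker rm -f -v` then removes the container together with its anonymous volumes. A hedged sketch of that graceful-then-forceful teardown (hypothetical helper, not the actual oci package implementation):

// Sketch: attempt a graceful shutdown inside the container, tolerate
// "not running" failures, then force-remove the container and volumes.
package main

import (
	"fmt"
	"os/exec"
)

func shutdownAndRemove(name string) error {
	// Best-effort graceful shutdown; exit status 1 here usually just
	// means the container has already stopped, as in the log above.
	if err := exec.Command("docker", "exec", "--privileged", "-t", name,
		"/bin/bash", "-c", "sudo init 0").Run(); err != nil {
		fmt.Printf("error shutdown %s (continuing): %v\n", name, err)
	}
	// -f kills a running container, -v also removes anonymous volumes.
	return exec.Command("docker", "rm", "-f", "-v", name).Run()
}

func main() {
	if err := shutdownAndRemove("cilium-20220512010244-7184"); err != nil {
		fmt.Println("delete failed:", err)
	}
}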
	I0512 01:27:51.215963    5648 cli_runner.go:164] Run: docker container inspect -f {{.Id}} cilium-20220512010244-7184
	W0512 01:27:52.279201    5648 cli_runner.go:211] docker container inspect -f {{.Id}} cilium-20220512010244-7184 returned with exit code 1
	I0512 01:27:52.279201    5648 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} cilium-20220512010244-7184: (1.0631839s)
	I0512 01:27:52.288305    5648 cli_runner.go:164] Run: docker network inspect cilium-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0512 01:27:53.355560    5648 cli_runner.go:211] docker network inspect cilium-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 01:27:53.355560    5648 cli_runner.go:217] Completed: docker network inspect cilium-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0670438s)
	I0512 01:27:53.363247    5648 network_create.go:272] running [docker network inspect cilium-20220512010244-7184] to gather additional debugging logs...
	I0512 01:27:53.363247    5648 cli_runner.go:164] Run: docker network inspect cilium-20220512010244-7184
	W0512 01:27:54.411808    5648 cli_runner.go:211] docker network inspect cilium-20220512010244-7184 returned with exit code 1
	I0512 01:27:54.411808    5648 cli_runner.go:217] Completed: docker network inspect cilium-20220512010244-7184: (1.0485077s)
	I0512 01:27:54.411808    5648 network_create.go:275] error running [docker network inspect cilium-20220512010244-7184]: docker network inspect cilium-20220512010244-7184: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220512010244-7184
	I0512 01:27:54.411808    5648 network_create.go:277] output of [docker network inspect cilium-20220512010244-7184]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220512010244-7184
	
	** /stderr **
	W0512 01:27:54.412705    5648 delete.go:139] delete failed (probably ok) <nil>
	I0512 01:27:54.412705    5648 fix.go:115] Sleeping 1 second for extra luck!
	I0512 01:27:55.423319    5648 start.go:131] createHost starting for "" (driver="docker")
	I0512 01:27:55.426979    5648 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0512 01:27:55.426979    5648 start.go:165] libmachine.API.Create for "cilium-20220512010244-7184" (driver="docker")
	I0512 01:27:55.426979    5648 client.go:168] LocalClient.Create starting
	I0512 01:27:55.428009    5648 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0512 01:27:55.428257    5648 main.go:134] libmachine: Decoding PEM data...
	I0512 01:27:55.428322    5648 main.go:134] libmachine: Parsing certificate...
	I0512 01:27:55.428478    5648 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0512 01:27:55.428670    5648 main.go:134] libmachine: Decoding PEM data...
	I0512 01:27:55.428726    5648 main.go:134] libmachine: Parsing certificate...
	I0512 01:27:55.438194    5648 cli_runner.go:164] Run: docker network inspect cilium-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0512 01:27:56.507696    5648 cli_runner.go:211] docker network inspect cilium-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 01:27:56.507696    5648 cli_runner.go:217] Completed: docker network inspect cilium-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0694466s)
	I0512 01:27:56.514694    5648 network_create.go:272] running [docker network inspect cilium-20220512010244-7184] to gather additional debugging logs...
	I0512 01:27:56.514694    5648 cli_runner.go:164] Run: docker network inspect cilium-20220512010244-7184
	W0512 01:27:57.582069    5648 cli_runner.go:211] docker network inspect cilium-20220512010244-7184 returned with exit code 1
	I0512 01:27:57.582069    5648 cli_runner.go:217] Completed: docker network inspect cilium-20220512010244-7184: (1.0673206s)
	I0512 01:27:57.582069    5648 network_create.go:275] error running [docker network inspect cilium-20220512010244-7184]: docker network inspect cilium-20220512010244-7184: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20220512010244-7184
	I0512 01:27:57.582069    5648 network_create.go:277] output of [docker network inspect cilium-20220512010244-7184]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20220512010244-7184
	
	** /stderr **
	I0512 01:27:57.589914    5648 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 01:27:58.618171    5648 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0282037s)
	I0512 01:27:58.637238    5648 network.go:284] reusing subnet 192.168.49.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005a90e0] amended:false}} dirty:map[] misses:0}
	I0512 01:27:58.637238    5648 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:27:58.637238    5648 network_create.go:115] attempt to create docker network cilium-20220512010244-7184 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0512 01:27:58.644349    5648 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220512010244-7184
	W0512 01:27:59.678184    5648 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220512010244-7184 returned with exit code 1
	I0512 01:27:59.678184    5648 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220512010244-7184: (1.033783s)
	W0512 01:27:59.678184    5648 network_create.go:107] failed to create docker network cilium-20220512010244-7184 192.168.49.0/24, will retry: subnet is taken
	I0512 01:27:59.695766    5648 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005a90e0] amended:false}} dirty:map[] misses:0}
	I0512 01:27:59.695766    5648 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:27:59.711768    5648 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005a90e0] amended:true}} dirty:map[192.168.49.0:0xc0005a90e0 192.168.58.0:0xc000014108] misses:0}
	I0512 01:27:59.711768    5648 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:27:59.711768    5648 network_create.go:115] attempt to create docker network cilium-20220512010244-7184 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0512 01:27:59.718762    5648 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220512010244-7184
	I0512 01:28:01.352633    5648 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20220512010244-7184: (1.6337882s)
	I0512 01:28:01.352633    5648 network_create.go:99] docker network cilium-20220512010244-7184 192.168.58.0/24 created
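The two `docker network create` attempts above show the subnet allocator at work: 192.168.49.0/24 is rejected with "subnet is taken", so the next candidate range (192.168.58.0/24) is reserved and retried. A sketch of that walk over candidate /24s; the step size between candidates and the helper name are assumptions, since the log only shows the 49 -> 58 hop:

package main

import (
	"fmt"
	"os/exec"
)

func createNetwork(name string) (string, error) {
	// Candidate /24s: 192.168.49.0, 192.168.58.0, ...
	for octet := 49; octet <= 247; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		err := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet,
			"--gateway="+gateway,
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			name).Run()
		if err == nil {
			return subnet, nil // 192.168.58.0/24 in this run
		}
		// Failure (e.g. "subnet is taken"): fall through to the next range.
	}
	return "", fmt.Errorf("no free private subnet found for %s", name)
}

func main() {
	fmt.Println(createNetwork("cilium-20220512010244-7184"))
}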
	I0512 01:28:01.352633    5648 kic.go:106] calculated static IP "192.168.58.2" for the "cilium-20220512010244-7184" container
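The "calculated static IP" line follows directly from the chosen subnet: the gateway takes the first host address (.1) and the node container the second (.2). A small sketch using net/netip; the helper name is ours:

package main

import (
	"fmt"
	"net/netip"
)

// nodeIP returns the first client address of the subnet: the gateway
// gets .1, the node container gets .2.
func nodeIP(cidr string) (netip.Addr, error) {
	prefix, err := netip.ParsePrefix(cidr)
	if err != nil {
		return netip.Addr{}, err
	}
	gateway := prefix.Addr().Next() // 192.168.58.1
	return gateway.Next(), nil      // 192.168.58.2
}

func main() {
	ip, _ := nodeIP("192.168.58.0/24")
	fmt.Println(ip) // 192.168.58.2
}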
	I0512 01:28:01.369634    5648 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0512 01:28:02.398976    5648 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0292891s)
	I0512 01:28:02.407991    5648 cli_runner.go:164] Run: docker volume create cilium-20220512010244-7184 --label name.minikube.sigs.k8s.io=cilium-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true
	I0512 01:28:03.443955    5648 cli_runner.go:217] Completed: docker volume create cilium-20220512010244-7184 --label name.minikube.sigs.k8s.io=cilium-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true: (1.0359117s)
	I0512 01:28:03.443955    5648 oci.go:103] Successfully created a docker volume cilium-20220512010244-7184
	I0512 01:28:03.451502    5648 cli_runner.go:164] Run: docker run --rm --name cilium-20220512010244-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220512010244-7184 --entrypoint /usr/bin/test -v cilium-20220512010244-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib
	I0512 01:28:16.800620    5648 cli_runner.go:217] Completed: docker run --rm --name cilium-20220512010244-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220512010244-7184 --entrypoint /usr/bin/test -v cilium-20220512010244-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib: (13.3483194s)
	I0512 01:28:16.800700    5648 oci.go:107] Successfully prepared a docker volume cilium-20220512010244-7184
	I0512 01:28:16.800728    5648 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:28:16.800874    5648 kic.go:179] Starting extracting preloaded images to volume ...
	I0512 01:28:16.809395    5648 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20220512010244-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir
	I0512 01:28:41.304018    5648 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-20220512010244-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir: (24.4933742s)
	I0512 01:28:41.304283    5648 kic.go:188] duration metric: took 24.501894 seconds to extract preloaded images to volume
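The lines above show the preload trick: a sidecar container with `--entrypoint /usr/bin/test` first verifies the named volume mounts and already contains /var/lib, then a second throwaway container mounts the lz4 tarball read-only and untars it straight into the volume, which later becomes /var of the node container. A sketch of the extraction step; the tarball path in main is illustrative (the real one lives under the jenkins .minikube cache shown in the log):

package main

import "os/exec"

// extractPreload untars the preloaded-images tarball into the named
// volume via a throwaway container whose entrypoint is tar, mirroring
// the `docker run --rm --entrypoint /usr/bin/tar ...` line above.
func extractPreload(volume, tarball, image string) error {
	return exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").Run()
}

func main() {
	_ = extractPreload(
		"cilium-20220512010244-7184",
		`C:\preload\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4`,
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138",
	)
}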
	I0512 01:28:41.314418    5648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:28:43.489018    5648 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1743715s)
	I0512 01:28:43.489018    5648 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:65 OomKillDisable:true NGoroutines:63 SystemTime:2022-05-12 01:28:42.344637 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:28:43.496761    5648 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0512 01:28:45.556152    5648 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.0592858s)
	I0512 01:28:45.563754    5648 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220512010244-7184 --name cilium-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220512010244-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220512010244-7184 --network cilium-20220512010244-7184 --ip 192.168.58.2 --volume cilium-20220512010244-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
	I0512 01:28:49.314283    5648 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20220512010244-7184 --name cilium-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20220512010244-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20220512010244-7184 --network cilium-20220512010244-7184 --ip 192.168.58.2 --volume cilium-20220512010244-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: (3.7503375s)
	I0512 01:28:49.325268    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Running}}
	I0512 01:28:50.438680    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Running}}: (1.1133547s)
	I0512 01:28:50.445680    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:28:51.634151    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1884101s)
	I0512 01:28:51.641157    5648 cli_runner.go:164] Run: docker exec cilium-20220512010244-7184 stat /var/lib/dpkg/alternatives/iptables
	I0512 01:28:52.918378    5648 cli_runner.go:217] Completed: docker exec cilium-20220512010244-7184 stat /var/lib/dpkg/alternatives/iptables: (1.2771551s)
	I0512 01:28:52.918849    5648 oci.go:247] the created container "cilium-20220512010244-7184" has a running status.
	I0512 01:28:52.918917    5648 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cilium-20220512010244-7184\id_rsa...
	I0512 01:28:53.366694    5648 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cilium-20220512010244-7184\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0512 01:28:54.614734    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:28:55.731181    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1163904s)
	I0512 01:28:55.749088    5648 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0512 01:28:55.749088    5648 kic_runner.go:114] Args: [docker exec --privileged cilium-20220512010244-7184 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0512 01:28:56.973775    5648 kic_runner.go:123] Done: [docker exec --privileged cilium-20220512010244-7184 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.224624s)
	I0512 01:28:56.976795    5648 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cilium-20220512010244-7184\id_rsa...
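The kic SSH bootstrap above generates an RSA keypair on the host, copies the public half into the container as /home/docker/.ssh/authorized_keys, chowns it to the docker user, and then tightens permissions on the private key file. A sketch of the same sequence, with ssh-keygen and docker cp standing in for minikube's internal runners (paths illustrative):

package main

import "os/exec"

// provisionSSHKey generates a keypair on the host, installs the public
// half as authorized_keys in the node container, and fixes ownership.
func provisionSSHKey(container, keyPath string) error {
	cmds := [][]string{
		{"ssh-keygen", "-t", "rsa", "-N", "", "-f", keyPath},
		{"docker", "exec", container, "mkdir", "-p", "/home/docker/.ssh"},
		{"docker", "cp", keyPath + ".pub",
			container + ":/home/docker/.ssh/authorized_keys"},
		{"docker", "exec", "--privileged", container,
			"chown", "docker:docker", "/home/docker/.ssh/authorized_keys"},
	}
	for _, c := range cmds {
		if err := exec.Command(c[0], c[1:]...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	_ = provisionSSHKey("cilium-20220512010244-7184", "id_rsa")
}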
	I0512 01:28:57.493064    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:28:58.549656    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.0563756s)
	I0512 01:28:58.549720    5648 machine.go:88] provisioning docker machine ...
	I0512 01:28:58.549781    5648 ubuntu.go:169] provisioning hostname "cilium-20220512010244-7184"
	I0512 01:28:58.558264    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	I0512 01:28:59.639046    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.0802124s)
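Because the node container was started with `--publish=127.0.0.1::22` (empty host port, so Docker picks a random one), every SSH connection first reads the mapped port back out of the container's NetworkSettings; that is what the repeated HostPort inspects are doing, and why the SSH client below dials 127.0.0.1:51226. A sketch of the lookup; the Go template is the one from the log, the helper name is ours:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort reads back the random host port Docker assigned to the
// container's 22/tcp.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	port, err := sshHostPort("cilium-20220512010244-7184")
	fmt.Println(port, err) // "51226" in this run
}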
	I0512 01:28:59.649931    5648 main.go:134] libmachine: Using SSH client type: native
	I0512 01:28:59.656938    5648 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51226 <nil> <nil>}
	I0512 01:28:59.656938    5648 main.go:134] libmachine: About to run SSH command:
	sudo hostname cilium-20220512010244-7184 && echo "cilium-20220512010244-7184" | sudo tee /etc/hostname
	I0512 01:28:59.831549    5648 main.go:134] libmachine: SSH cmd err, output: <nil>: cilium-20220512010244-7184
	
	I0512 01:28:59.839540    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	I0512 01:29:00.911960    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.0723649s)
	I0512 01:29:00.917056    5648 main.go:134] libmachine: Using SSH client type: native
	I0512 01:29:00.917493    5648 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51226 <nil> <nil>}
	I0512 01:29:00.917552    5648 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scilium-20220512010244-7184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cilium-20220512010244-7184/g' /etc/hosts;
				else 
					echo '127.0.1.1 cilium-20220512010244-7184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0512 01:29:01.042397    5648 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0512 01:29:01.042397    5648 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0512 01:29:01.042397    5648 ubuntu.go:177] setting up certificates
	I0512 01:29:01.042397    5648 provision.go:83] configureAuth start
	I0512 01:29:01.050487    5648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220512010244-7184
	I0512 01:29:02.148103    5648 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220512010244-7184: (1.097228s)
	I0512 01:29:02.148192    5648 provision.go:138] copyHostCerts
	I0512 01:29:02.148612    5648 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0512 01:29:02.148612    5648 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0512 01:29:02.149077    5648 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0512 01:29:02.150564    5648 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0512 01:29:02.150564    5648 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0512 01:29:02.150938    5648 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0512 01:29:02.151821    5648 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0512 01:29:02.151821    5648 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0512 01:29:02.151821    5648 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0512 01:29:02.152821    5648 provision.go:112] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.cilium-20220512010244-7184 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube cilium-20220512010244-7184]
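The server certificate above is minted from the minikube CA with a SAN list covering the node IP, loopback, and the cluster hostnames. The sketch below builds an equivalent certificate with crypto/x509; for brevity it creates a throwaway CA in memory (the real flow loads ca.pem/ca-key.pem from disk) and drops error handling:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; the real flow loads ca.pem / ca-key.pem from disk.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(1, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SAN list from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.cilium-20220512010244-7184"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "cilium-20220512010244-7184"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}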
	I0512 01:29:02.726191    5648 provision.go:172] copyRemoteCerts
	I0512 01:29:02.738547    5648 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0512 01:29:02.746628    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	I0512 01:29:03.830865    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.0841816s)
	I0512 01:29:03.831446    5648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51226 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cilium-20220512010244-7184\id_rsa Username:docker}
	I0512 01:29:04.084515    5648 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.3458995s)
	I0512 01:29:04.085406    5648 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0512 01:29:04.162676    5648 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0512 01:29:04.223090    5648 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0512 01:29:04.274003    5648 provision.go:86] duration metric: configureAuth took 3.2314414s
	I0512 01:29:04.274039    5648 ubuntu.go:193] setting minikube options for container-runtime
	I0512 01:29:04.274778    5648 config.go:178] Loaded profile config "cilium-20220512010244-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:29:04.285485    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	I0512 01:29:05.377581    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.0918992s)
	I0512 01:29:05.469834    5648 main.go:134] libmachine: Using SSH client type: native
	I0512 01:29:05.469834    5648 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51226 <nil> <nil>}
	I0512 01:29:05.469834    5648 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0512 01:29:05.704553    5648 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0512 01:29:05.704626    5648 ubuntu.go:71] root file system type: overlay
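The `df --output=fstype / | tail -n 1` probe above tells the provisioner what the root filesystem is ("overlay" inside the kic container), which feeds the container-runtime options set next. A local sketch of the same probe (GNU coreutils df assumed; the real probe runs over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// rootFSType mirrors the probe above: `df --output=fstype /` prints a
// header plus one data row, so keep only the last line.
func rootFSType() (string, error) {
	out, err := exec.Command("sh", "-c",
		"df --output=fstype / | tail -n 1").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	fstype, err := rootFSType()
	fmt.Println(fstype, err) // "overlay" in the log above
}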
	I0512 01:29:05.705026    5648 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0512 01:29:05.713278    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	I0512 01:29:06.819187    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.1058526s)
	I0512 01:29:06.822193    5648 main.go:134] libmachine: Using SSH client type: native
	I0512 01:29:06.823195    5648 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51226 <nil> <nil>}
	I0512 01:29:06.823195    5648 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0512 01:29:07.046141    5648 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0512 01:29:07.055803    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	I0512 01:29:08.167580    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.1116634s)
	I0512 01:29:08.172132    5648 main.go:134] libmachine: Using SSH client type: native
	I0512 01:29:08.173017    5648 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51226 <nil> <nil>}
	I0512 01:29:08.173106    5648 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0512 01:29:09.774797    5648 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-05 13:17:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-12 01:29:07.021609000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0512 01:29:09.774797    5648 machine.go:91] provisioned docker machine in 11.2245052s
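The long SSH command whose diff output ends above is an update-if-changed idiom: the new unit is first written to docker.service.new, and because `diff -u` exits non-zero when the files differ, the `|| { ... }` branch swaps the file in and runs daemon-reload, enable, and restart only when something actually changed. A sketch of the same idiom run locally (minikube runs it over SSH inside the node):

package main

import "os/exec"

// updateDockerUnit replaces the systemd unit and restarts the service
// only when the staged copy differs from the installed one; diff's
// non-zero exit on difference is what triggers the `||` branch.
func updateDockerUnit() error {
	script := `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`
	return exec.Command("/bin/bash", "-c", script).Run()
}

func main() {
	_ = updateDockerUnit()
}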
	I0512 01:29:09.774797    5648 client.go:171] LocalClient.Create took 1m14.3440272s
	I0512 01:29:09.774797    5648 start.go:173] duration metric: libmachine.API.Create for "cilium-20220512010244-7184" took 1m14.3440272s
	I0512 01:29:09.774797    5648 start.go:306] post-start starting for "cilium-20220512010244-7184" (driver="docker")
	I0512 01:29:09.774797    5648 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0512 01:29:09.788619    5648 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0512 01:29:09.798547    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	I0512 01:29:10.970457    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.1716699s)
	I0512 01:29:10.970689    5648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51226 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cilium-20220512010244-7184\id_rsa Username:docker}
	I0512 01:29:11.050188    5648 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.2615048s)
	I0512 01:29:11.058912    5648 ssh_runner.go:195] Run: cat /etc/os-release
	I0512 01:29:11.077018    5648 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0512 01:29:11.077018    5648 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0512 01:29:11.077018    5648 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0512 01:29:11.077018    5648 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0512 01:29:11.077018    5648 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0512 01:29:11.077593    5648 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0512 01:29:11.078433    5648 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem -> 71842.pem in /etc/ssl/certs
	I0512 01:29:11.089906    5648 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0512 01:29:11.128870    5648 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /etc/ssl/certs/71842.pem (1708 bytes)
	I0512 01:29:11.184358    5648 start.go:309] post-start completed in 1.4094894s
	I0512 01:29:11.201664    5648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220512010244-7184
	I0512 01:29:12.330678    5648 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220512010244-7184: (1.1287832s)
	I0512 01:29:12.330678    5648 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\config.json ...
	I0512 01:29:12.349466    5648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 01:29:12.356590    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	I0512 01:29:13.598432    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.2417793s)
	I0512 01:29:13.598924    5648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51226 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cilium-20220512010244-7184\id_rsa Username:docker}
	I0512 01:29:13.739254    5648 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.3897182s)
	I0512 01:29:13.749263    5648 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0512 01:29:13.763983    5648 start.go:134] duration metric: createHost completed in 1m18.3366697s
	I0512 01:29:13.781898    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:29:14.975180    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.1932205s)
	W0512 01:29:14.975180    5648 fix.go:129] unexpected machine state, will restart: <nil>
	I0512 01:29:14.975180    5648 machine.go:88] provisioning docker machine ...
	I0512 01:29:14.975180    5648 ubuntu.go:169] provisioning hostname "cilium-20220512010244-7184"
	I0512 01:29:14.982169    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	I0512 01:29:16.198689    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.2163148s)
	I0512 01:29:16.204272    5648 main.go:134] libmachine: Using SSH client type: native
	I0512 01:29:16.204838    5648 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51226 <nil> <nil>}
	I0512 01:29:16.204838    5648 main.go:134] libmachine: About to run SSH command:
	sudo hostname cilium-20220512010244-7184 && echo "cilium-20220512010244-7184" | sudo tee /etc/hostname
	I0512 01:29:16.430721    5648 main.go:134] libmachine: SSH cmd err, output: <nil>: cilium-20220512010244-7184
	
	I0512 01:29:16.443263    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	I0512 01:29:17.527110    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.083595s)
	I0512 01:29:17.532277    5648 main.go:134] libmachine: Using SSH client type: native
	I0512 01:29:17.532277    5648 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51226 <nil> <nil>}
	I0512 01:29:17.532818    5648 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scilium-20220512010244-7184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cilium-20220512010244-7184/g' /etc/hosts;
				else 
					echo '127.0.1.1 cilium-20220512010244-7184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0512 01:29:17.713620    5648 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0512 01:29:17.713620    5648 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0512 01:29:17.713620    5648 ubuntu.go:177] setting up certificates
	I0512 01:29:17.713620    5648 provision.go:83] configureAuth start
	I0512 01:29:17.723607    5648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220512010244-7184
	I0512 01:29:18.888843    5648 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220512010244-7184: (1.1651765s)
	I0512 01:29:18.888843    5648 provision.go:138] copyHostCerts
	I0512 01:29:18.888843    5648 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0512 01:29:18.888843    5648 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0512 01:29:18.888843    5648 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0512 01:29:18.889923    5648 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0512 01:29:18.889923    5648 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0512 01:29:18.890835    5648 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0512 01:29:18.891826    5648 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0512 01:29:18.891826    5648 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0512 01:29:18.891826    5648 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0512 01:29:18.892836    5648 provision.go:112] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.cilium-20220512010244-7184 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube cilium-20220512010244-7184]
	I0512 01:29:19.098070    5648 provision.go:172] copyRemoteCerts
	I0512 01:29:19.120425    5648 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0512 01:29:19.128127    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	I0512 01:29:20.350647    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.2224574s)
	I0512 01:29:20.350647    5648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51226 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cilium-20220512010244-7184\id_rsa Username:docker}
	I0512 01:29:20.493807    5648 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.3732188s)
	I0512 01:29:20.494150    5648 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0512 01:29:20.556127    5648 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0512 01:29:20.627200    5648 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0512 01:29:20.680622    5648 provision.go:86] duration metric: configureAuth took 2.9668513s
	I0512 01:29:20.681622    5648 ubuntu.go:193] setting minikube options for container-runtime
	I0512 01:29:20.681622    5648 config.go:178] Loaded profile config "cilium-20220512010244-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:29:20.688625    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	I0512 01:29:21.866080    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.1773952s)
	I0512 01:29:21.870082    5648 main.go:134] libmachine: Using SSH client type: native
	I0512 01:29:21.870082    5648 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51226 <nil> <nil>}
	I0512 01:29:21.870082    5648 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0512 01:29:22.090027    5648 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0512 01:29:22.090027    5648 ubuntu.go:71] root file system type: overlay
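The probe above derives the machine's root filesystem type; `overlay` indicates the provisioned "machine" is itself a container (the kicbase image) rather than a VM. A one-line sketch of the same check:

	# Root filesystem probe; prints the marker only on an overlay root.
	[ "$(df --output=fstype / | tail -n 1)" = "overlay" ] && echo "overlay root: running inside a container"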
	I0512 01:29:22.090027    5648 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0512 01:29:22.100315    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	I0512 01:29:23.306719    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.2063426s)
	I0512 01:29:23.312177    5648 main.go:134] libmachine: Using SSH client type: native
	I0512 01:29:23.312877    5648 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51226 <nil> <nil>}
	I0512 01:29:23.313002    5648 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0512 01:29:23.456339    5648 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0512 01:29:23.470333    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	I0512 01:29:24.669798    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.1992143s)
	I0512 01:29:24.674300    5648 main.go:134] libmachine: Using SSH client type: native
	I0512 01:29:24.675115    5648 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51226 <nil> <nil>}
	I0512 01:29:24.675170    5648 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0512 01:29:24.830514    5648 main.go:134] libmachine: SSH cmd err, output: <nil>: 
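Two details in the unit handling above are worth calling out. The empty `ExecStart=` line clears the command inherited from the base unit, since systemd rejects multiple `ExecStart=` values for anything but `Type=oneshot` services, and the swap itself is guarded by `diff`, so dockerd is only reloaded and restarted when the rendered unit actually changed. A condensed sketch of that guard:

	# Replace and restart only when the freshly rendered unit differs from the installed one.
	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	    sudo systemctl daemon-reload && sudo systemctl restart docker
	fi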
	I0512 01:29:24.830514    5648 machine.go:91] provisioned docker machine in 9.8548328s
	I0512 01:29:24.830514    5648 start.go:306] post-start starting for "cilium-20220512010244-7184" (driver="docker")
	I0512 01:29:24.830514    5648 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0512 01:29:24.843272    5648 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0512 01:29:24.852286    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	I0512 01:29:26.024918    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.1725717s)
	I0512 01:29:26.024918    5648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51226 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cilium-20220512010244-7184\id_rsa Username:docker}
	I0512 01:29:26.149610    5648 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.3052695s)
	I0512 01:29:26.160601    5648 ssh_runner.go:195] Run: cat /etc/os-release
	I0512 01:29:26.174825    5648 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0512 01:29:26.174982    5648 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0512 01:29:26.175027    5648 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0512 01:29:26.175027    5648 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0512 01:29:26.175027    5648 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0512 01:29:26.175850    5648 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0512 01:29:26.177743    5648 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem -> 71842.pem in /etc/ssl/certs
	I0512 01:29:26.193062    5648 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0512 01:29:26.224854    5648 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /etc/ssl/certs/71842.pem (1708 bytes)
	I0512 01:29:26.272858    5648 start.go:309] post-start completed in 1.4422709s
	I0512 01:29:26.286530    5648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 01:29:26.293612    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	I0512 01:29:27.464093    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.1704212s)
	I0512 01:29:27.464093    5648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51226 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cilium-20220512010244-7184\id_rsa Username:docker}
	I0512 01:29:27.581663    5648 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.2950676s)
	I0512 01:29:27.592669    5648 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0512 01:29:27.603677    5648 fix.go:57] fixHost completed within 6m2.7786923s
	I0512 01:29:27.604648    5648 start.go:81] releasing machines lock for "cilium-20220512010244-7184", held for 6m2.7796631s
	I0512 01:29:27.613670    5648 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220512010244-7184
	I0512 01:29:28.840243    5648 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20220512010244-7184: (1.2265107s)
	I0512 01:29:28.842237    5648 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0512 01:29:28.851237    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	I0512 01:29:28.852235    5648 ssh_runner.go:195] Run: sudo service containerd status
	I0512 01:29:28.860241    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	I0512 01:29:30.033156    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.1817521s)
	I0512 01:29:30.033221    5648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51226 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cilium-20220512010244-7184\id_rsa Username:docker}
	I0512 01:29:30.048907    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.1886053s)
	I0512 01:29:30.048907    5648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51226 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cilium-20220512010244-7184\id_rsa Username:docker}
	I0512 01:29:30.230888    5648 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.3885801s)
	I0512 01:29:30.230888    5648 ssh_runner.go:235] Completed: sudo service containerd status: (1.3785825s)
	I0512 01:29:30.240884    5648 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 01:29:30.296362    5648 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0512 01:29:30.308062    5648 ssh_runner.go:195] Run: sudo service crio status
	I0512 01:29:30.362155    5648 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0512 01:29:30.408210    5648 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 01:29:30.460857    5648 ssh_runner.go:195] Run: sudo service docker status
	I0512 01:29:30.507743    5648 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 01:29:30.638206    5648 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 01:29:30.732224    5648 out.go:204] * Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	I0512 01:29:30.739101    5648 cli_runner.go:164] Run: docker exec -t cilium-20220512010244-7184 dig +short host.docker.internal
	I0512 01:29:32.028141    5648 cli_runner.go:217] Completed: docker exec -t cilium-20220512010244-7184 dig +short host.docker.internal: (1.2889738s)
	I0512 01:29:32.028141    5648 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0512 01:29:32.038146    5648 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0512 01:29:32.057679    5648 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
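The rewrite above filters any stale `host.minikube.internal` line out of /etc/hosts, appends the fresh mapping, and copies the result back into place, so repeated starts never accumulate duplicate entries. A quick check of the outcome:

	# Verify the injected mapping as seen by the libc resolver.
	getent hosts host.minikube.internal   # expect: 192.168.65.2 host.minikube.internal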
	I0512 01:29:32.116043    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	I0512 01:29:33.188340    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.0722426s)
	I0512 01:29:33.188340    5648 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:29:33.201356    5648 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 01:29:33.276348    5648 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 01:29:33.276348    5648 docker.go:541] Images already preloaded, skipping extraction
	I0512 01:29:33.284340    5648 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 01:29:33.350830    5648 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 01:29:33.350945    5648 cache_images.go:84] Images are preloaded, skipping loading
	I0512 01:29:33.360953    5648 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
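The query above detects Docker's cgroup driver; it has to agree with the `cgroupDriver: cgroupfs` value handed to the kubelet further down, since a docker/kubelet cgroup-driver mismatch keeps pods from ever starting. The same check by hand:

	docker info --format '{{.CgroupDriver}}'   # "cgroupfs" on this node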
	I0512 01:29:33.571671    5648 cni.go:95] Creating CNI manager for "cilium"
	I0512 01:29:33.571735    5648 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0512 01:29:33.571735    5648 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cilium-20220512010244-7184 NodeName:cilium-20220512010244-7184 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0512 01:29:33.572000    5648 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "cilium-20220512010244-7184"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0512 01:29:33.572105    5648 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=cilium-20220512010244-7184 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:cilium-20220512010244-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:}
	I0512 01:29:33.583303    5648 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0512 01:29:33.616716    5648 binaries.go:44] Found k8s binaries, skipping transfer
	I0512 01:29:33.628064    5648 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /var/lib/minikube /etc/init.d
	I0512 01:29:33.655685    5648 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0512 01:29:33.700431    5648 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0512 01:29:33.733808    5648 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I0512 01:29:33.773313    5648 ssh_runner.go:362] scp memory --> /var/lib/minikube/openrc-restart-wrapper.sh (233 bytes)
	I0512 01:29:33.821037    5648 ssh_runner.go:362] scp memory --> /etc/init.d/kubelet (839 bytes)
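The kubelet is wired up the same way as dockerd earlier: a base unit plus the `10-kubeadm.conf` drop-in copied above, whose empty `ExecStart=` resets the start command before the flag set shown above is applied. The merged result can be inspected on the node (a hedged sketch):

	systemctl cat kubelet   # base unit followed by the 10-kubeadm.conf drop-in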
	I0512 01:29:33.866167    5648 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0512 01:29:33.880469    5648 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0512 01:29:33.907631    5648 certs.go:54] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184 for IP: 192.168.58.2
	I0512 01:29:33.908180    5648 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I0512 01:29:33.908357    5648 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I0512 01:29:33.908912    5648 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\client.key
	I0512 01:29:33.909170    5648 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\client.crt with IP's: []
	I0512 01:29:34.275475    5648 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\client.crt ...
	I0512 01:29:34.275475    5648 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\client.crt: {Name:mk509d15f693e3e8b355d87fa3a21168fa230d04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:29:34.276477    5648 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\client.key ...
	I0512 01:29:34.276477    5648 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\client.key: {Name:mk888122803f39bcf3882dab17246ee642f8550e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:29:34.278495    5648 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\apiserver.key.cee25041
	I0512 01:29:34.278495    5648 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0512 01:29:34.517811    5648 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\apiserver.crt.cee25041 ...
	I0512 01:29:34.517811    5648 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\apiserver.crt.cee25041: {Name:mk5a8e01434fc53ba1309c634c112b7fad86ec60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:29:34.518941    5648 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\apiserver.key.cee25041 ...
	I0512 01:29:34.518941    5648 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\apiserver.key.cee25041: {Name:mk48767e55ffcc834aa9c015b26ea5865a27222c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:29:34.520092    5648 certs.go:320] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\apiserver.crt.cee25041 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\apiserver.crt
	I0512 01:29:34.529194    5648 certs.go:324] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\apiserver.key.cee25041 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\apiserver.key
	I0512 01:29:34.530815    5648 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\proxy-client.key
	I0512 01:29:34.530815    5648 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\proxy-client.crt with IP's: []
	I0512 01:29:34.640954    5648 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\proxy-client.crt ...
	I0512 01:29:34.640954    5648 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\proxy-client.crt: {Name:mkbd949256964213ca682e8f817f4068dab3f565 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:29:34.642744    5648 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\proxy-client.key ...
	I0512 01:29:34.642744    5648 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\proxy-client.key: {Name:mkdc4b4002d4c0f47f0d72129375d09a57d47f99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:29:34.653546    5648 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184.pem (1338 bytes)
	W0512 01:29:34.653546    5648 certs.go:384] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184_empty.pem, impossibly tiny 0 bytes
	I0512 01:29:34.653546    5648 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0512 01:29:34.653546    5648 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0512 01:29:34.653546    5648 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0512 01:29:34.654545    5648 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0512 01:29:34.654545    5648 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem (1708 bytes)
	I0512 01:29:34.657550    5648 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0512 01:29:34.710878    5648 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0512 01:29:34.765845    5648 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0512 01:29:34.823886    5648 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\cilium-20220512010244-7184\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0512 01:29:34.869456    5648 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0512 01:29:34.925650    5648 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0512 01:29:34.984780    5648 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0512 01:29:35.046141    5648 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0512 01:29:35.095258    5648 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /usr/share/ca-certificates/71842.pem (1708 bytes)
	I0512 01:29:35.153811    5648 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0512 01:29:35.206700    5648 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184.pem --> /usr/share/ca-certificates/7184.pem (1338 bytes)
	I0512 01:29:35.259696    5648 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0512 01:29:35.331958    5648 ssh_runner.go:195] Run: openssl version
	I0512 01:29:35.356960    5648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7184.pem && ln -fs /usr/share/ca-certificates/7184.pem /etc/ssl/certs/7184.pem"
	I0512 01:29:35.390956    5648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7184.pem
	I0512 01:29:35.403967    5648 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 11 23:10 /usr/share/ca-certificates/7184.pem
	I0512 01:29:35.414963    5648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7184.pem
	I0512 01:29:35.451598    5648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7184.pem /etc/ssl/certs/51391683.0"
	I0512 01:29:35.491603    5648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71842.pem && ln -fs /usr/share/ca-certificates/71842.pem /etc/ssl/certs/71842.pem"
	I0512 01:29:35.536606    5648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71842.pem
	I0512 01:29:35.547598    5648 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 11 23:10 /usr/share/ca-certificates/71842.pem
	I0512 01:29:35.560637    5648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71842.pem
	I0512 01:29:35.582721    5648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71842.pem /etc/ssl/certs/3ec20f2e.0"
	I0512 01:29:35.628604    5648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0512 01:29:35.662631    5648 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:29:35.677019    5648 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 11 23:00 /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:29:35.688031    5648 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:29:35.718830    5648 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
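Each CA lands twice: the PEM itself under /usr/share/ca-certificates and a hash-named symlink in /etc/ssl/certs, because OpenSSL looks trust anchors up by subject hash (hence b5213941.0 for minikubeCA above). A sketch reproducing the hash-link step:

	# Derive the subject hash and create the lookup symlink OpenSSL expects.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"   # b5213941.0 here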
	I0512 01:29:35.742837    5648 kubeadm.go:391] StartCluster: {Name:cilium-20220512010244-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:cilium-20220512010244-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 01:29:35.750826    5648 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0512 01:29:35.840456    5648 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0512 01:29:35.873119    5648 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0512 01:29:35.892796    5648 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0512 01:29:35.914547    5648 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0512 01:29:35.943422    5648 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0512 01:29:35.943422    5648 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
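Since the stale-config check came back empty, a fresh `kubeadm init` is started, waiving exactly the preflight checks the docker driver is known to trip: swap, memory, port 10250, SystemVerification, the bridge-nf sysctl, and the directory/manifest presence checks. The preflight phase can also be run on its own against the same rendered config (a hedged sketch):

	sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=all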
	I0512 01:30:15.538643    5648 out.go:204]   - Generating certificates and keys ...
	I0512 01:30:15.544904    5648 out.go:204]   - Booting up control plane ...
	I0512 01:30:15.551028    5648 out.go:204]   - Configuring RBAC rules ...
	I0512 01:30:15.555136    5648 cni.go:95] Creating CNI manager for "cilium"
	I0512 01:30:15.558538    5648 out.go:177] * Configuring Cilium (Container Networking Interface) ...
	I0512 01:30:15.572337    5648 ssh_runner.go:195] Run: sudo /bin/bash -c "grep 'bpffs /sys/fs/bpf' /proc/mounts || sudo mount bpffs -t bpf /sys/fs/bpf"
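Cilium requires the BPF filesystem at /sys/fs/bpf; the command above consults /proc/mounts first, so the mount is only performed when it is missing. Verifying after the fact:

	findmnt /sys/fs/bpf   # FSTYPE should read "bpf" once mounted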
	I0512 01:30:15.641765    5648 cilium.go:816] Using pod CIDR: 10.244.0.0/16
	I0512 01:30:15.641765    5648 cilium.go:827] cilium options: {PodSubnet:10.244.0.0/16}
	I0512 01:30:15.641765    5648 cilium.go:831] cilium config:
	---
	# Source: cilium/templates/cilium-agent-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-serviceaccount.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-configmap.yaml
	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: cilium-config
	  namespace: kube-system
	data:
	
	  # Identity allocation mode selects how identities are shared between cilium
	  # nodes by setting how they are stored. The options are "crd" or "kvstore".
	  # - "crd" stores identities in kubernetes as CRDs (custom resource definition).
	  #   These can be queried with:
	  #     kubectl get ciliumid
	  # - "kvstore" stores identities in a kvstore, etcd or consul, that is
	  #   configured below. Cilium versions before 1.6 supported only the kvstore
	  #   backend. Upgrades from these older cilium versions should continue using
	  #   the kvstore by commenting out the identity-allocation-mode below, or
	  #   setting it to "kvstore".
	  identity-allocation-mode: crd
	  cilium-endpoint-gc-interval: "5m0s"
	
	  # If you want to run cilium in debug mode change this value to true
	  debug: "false"
	  # The agent can be put into the following three policy enforcement modes:
	  # default, always and never.
	  # https://docs.cilium.io/en/latest/policy/intro/#policy-enforcement-modes
	  enable-policy: "default"
	
	  # Enable IPv4 addressing. If enabled, all endpoints are allocated an IPv4
	  # address.
	  enable-ipv4: "true"
	
	  # Enable IPv6 addressing. If enabled, all endpoints are allocated an IPv6
	  # address.
	  enable-ipv6: "false"
	  # Users who wish to specify their own custom CNI configuration file must set
	  # custom-cni-conf to "true", otherwise Cilium may overwrite the configuration.
	  custom-cni-conf: "false"
	  enable-bpf-clock-probe: "true"
	  # If you want cilium monitor to aggregate tracing for packets, set this level
	  # to "low", "medium", or "maximum". The higher the level, the less packets
	  # that will be seen in monitor output.
	  monitor-aggregation: medium
	
	  # The monitor aggregation interval governs the typical time between monitor
	  # notification events for each allowed connection.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-interval: 5s
	
	  # The monitor aggregation flags determine which TCP flags, upon their
	  # first observation, cause monitor notifications to be generated.
	  #
	  # Only effective when monitor aggregation is set to "medium" or higher.
	  monitor-aggregation-flags: all
	  # Specifies the ratio (0.0-1.0) of total system memory to use for dynamic
	  # sizing of the TCP CT, non-TCP CT, NAT and policy BPF maps.
	  bpf-map-dynamic-size-ratio: "0.0025"
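	  # Illustrative arithmetic (an assumption for scale, not part of the shipped
	  # manifest): at the 0.0025 ratio above, a node with 4 GiB of memory budgets
	  # roughly 0.0025 * 4096 MiB ~= 10 MiB in total for these dynamically sized maps.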
	  # bpf-policy-map-max specifies the maximum number of entries in endpoint
	  # policy map (per endpoint)
	  bpf-policy-map-max: "16384"
	  # bpf-lb-map-max specifies the maximum number of entries in bpf lb service,
	  # backend and affinity maps.
	  bpf-lb-map-max: "65536"
	  # Pre-allocation of map entries allows per-packet latency to be reduced, at
	  # the expense of up-front memory allocation for the entries in the maps. The
	  # default value below will minimize memory usage in the default installation;
	  # users who are sensitive to latency may consider setting this to "true".
	  #
	  # This option was introduced in Cilium 1.4. Cilium 1.3 and earlier ignore
	  # this option and behave as though it is set to "true".
	  #
	  # If this value is modified, then during the next Cilium startup the restore
	  # of existing endpoints and tracking of ongoing connections may be disrupted.
	  # As a result, reply packets may be dropped and the load-balancing decisions
	  # for established connections may change.
	  #
	  # If this option is set to "false" during an upgrade from 1.3 or earlier to
	  # 1.4 or later, then it may cause one-time disruptions during the upgrade.
	  preallocate-bpf-maps: "false"
	
	  # Regular expression matching compatible Istio sidecar istio-proxy
	  # container image names
	  sidecar-istio-proxy-image: "cilium/istio_proxy"
	
	  # Name of the cluster. Only relevant when building a mesh of clusters.
	  cluster-name: default
	  # Unique ID of the cluster. Must be unique across all connected clusters and
	  # in the range of 1 to 255. Only relevant when building a mesh of clusters.
	  cluster-id: ""
	
	  # Encapsulation mode for communication between nodes
	  # Possible values:
	  #   - disabled
	  #   - vxlan (default)
	  #   - geneve
	  tunnel: vxlan
	  # Enables L7 proxy for L7 policy enforcement and visibility
	  enable-l7-proxy: "true"
	
	  # wait-bpf-mount makes the init container wait until the bpf filesystem is mounted
	  wait-bpf-mount: "false"
	
	  masquerade: "true"
	  enable-bpf-masquerade: "true"
	
	  enable-xt-socket-fallback: "true"
	  install-iptables-rules: "true"
	
	  auto-direct-node-routes: "false"
	  enable-bandwidth-manager: "false"
	  enable-local-redirect-policy: "false"
	  kube-proxy-replacement:  "probe"
	  kube-proxy-replacement-healthz-bind-address: ""
	  enable-health-check-nodeport: "true"
	  node-port-bind-protection: "true"
	  enable-auto-protect-node-port-range: "true"
	  enable-session-affinity: "true"
	  k8s-require-ipv4-pod-cidr: "true"
	  k8s-require-ipv6-pod-cidr: "false"
	  enable-endpoint-health-checking: "true"
	  enable-health-checking: "true"
	  enable-well-known-identities: "false"
	  enable-remote-node-identity: "true"
	  operator-api-serve-addr: "127.0.0.1:9234"
	  # Enable Hubble gRPC service.
	  enable-hubble: "true"
	  # UNIX domain socket for Hubble server to listen to.
	  hubble-socket-path:  "/var/run/cilium/hubble.sock"
	  # An additional address for Hubble server to listen to (e.g. ":4244").
	  hubble-listen-address: ":4244"
	  hubble-disable-tls: "false"
	  hubble-tls-cert-file: /var/lib/cilium/tls/hubble/server.crt
	  hubble-tls-key-file: /var/lib/cilium/tls/hubble/server.key
	  hubble-tls-client-ca-files: /var/lib/cilium/tls/hubble/client-ca.crt
	  ipam: "cluster-pool"
	  cluster-pool-ipv4-cidr: "10.244.0.0/16"
	  cluster-pool-ipv4-mask-size: "24"
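	  # Illustrative arithmetic (an assumption, not part of the shipped manifest):
	  # carving the /16 pool above into /24 per-node blocks yields 2^(24-16) = 256
	  # per-node allocations of a /24 (256 addresses) each.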
	  disable-cnp-status-updates: "true"
	  cgroup-root: "/run/cilium/cgroupv2"
	---
	# Source: cilium/templates/cilium-agent-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium
	rules:
	- apiGroups:
	  - networking.k8s.io
	  resources:
	  - networkpolicies
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - namespaces
	  - services
	  - nodes
	  - endpoints
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  - pods
	  - pods/finalizers
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	  - delete
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  verbs:
	  - get
	  - list
	  - watch
	  - update
	- apiGroups:
	  - ""
	  resources:
	  - nodes
	  - nodes/status
	  verbs:
	  - patch
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  # Deprecated for removal in v1.10
	  - create
	  - list
	  - watch
	  - update
	
	  # This is used when validating policies in preflight. It will need to stay
	  # until we figure out how to avoid "get" inside the preflight, and should
	  # ideally be removed then.
	  - get
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	---
	# Source: cilium/templates/cilium-operator-clusterrole.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: cilium-operator
	rules:
	- apiGroups:
	  - ""
	  resources:
	  # to automatically delete [core|kube]dns pods so that they start being
	  # managed by Cilium
	  - pods
	  verbs:
	  - get
	  - list
	  - watch
	  - delete
	- apiGroups:
	  - discovery.k8s.io
	  resources:
	  - endpointslices
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - ""
	  resources:
	  # to perform the translation of a CNP that contains 'ToGroup' to its endpoints
	  - services
	  - endpoints
	  # to check apiserver connectivity
	  - namespaces
	  verbs:
	  - get
	  - list
	  - watch
	- apiGroups:
	  - cilium.io
	  resources:
	  - ciliumnetworkpolicies
	  - ciliumnetworkpolicies/status
	  - ciliumnetworkpolicies/finalizers
	  - ciliumclusterwidenetworkpolicies
	  - ciliumclusterwidenetworkpolicies/status
	  - ciliumclusterwidenetworkpolicies/finalizers
	  - ciliumendpoints
	  - ciliumendpoints/status
	  - ciliumendpoints/finalizers
	  - ciliumnodes
	  - ciliumnodes/status
	  - ciliumnodes/finalizers
	  - ciliumidentities
	  - ciliumidentities/status
	  - ciliumidentities/finalizers
	  - ciliumlocalredirectpolicies
	  - ciliumlocalredirectpolicies/status
	  - ciliumlocalredirectpolicies/finalizers
	  verbs:
	  - '*'
	- apiGroups:
	  - apiextensions.k8s.io
	  resources:
	  - customresourcedefinitions
	  verbs:
	  - create
	  - get
	  - list
	  - update
	  - watch
	# For cilium-operator running in HA mode.
	#
	# Cilium operator running in HA mode requires the use of ResourceLock for Leader Election
	# between multiple running instances.
	# The preferred way of doing this is to use LeasesResourceLock as edits to Leases are less
	# common and fewer objects in the cluster watch "all Leases".
	# The support for leases was introduced in coordination.k8s.io/v1 during Kubernetes 1.14 release.
	# In Cilium we currently don't support HA mode for K8s version < 1.14. This condition makes sure
	# that we only authorize access to leases resources in supported K8s versions.
	- apiGroups:
	  - coordination.k8s.io
	  resources:
	  - leases
	  verbs:
	  - create
	  - get
	  - update
	---
	# Source: cilium/templates/cilium-agent-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium
	subjects:
	- kind: ServiceAccount
	  name: cilium
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-operator-clusterrolebinding.yaml
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: cilium-operator
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cilium-operator
	subjects:
	- kind: ServiceAccount
	  name: cilium-operator
	  namespace: kube-system
	---
	# Source: cilium/templates/cilium-agent-daemonset.yaml
	apiVersion: apps/v1
	kind: DaemonSet
	metadata:
	  labels:
	    k8s-app: cilium
	  name: cilium
	  namespace: kube-system
	spec:
	  selector:
	    matchLabels:
	      k8s-app: cilium
	  updateStrategy:
	    rollingUpdate:
	      maxUnavailable: 2
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	        # This annotation plus the CriticalAddonsOnly toleration marks
	        # cilium as a critical pod in the cluster, which ensures cilium
	        # gets priority scheduling.
	        # https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/
	        scheduler.alpha.kubernetes.io/critical-pod: ""
	      labels:
	        k8s-app: cilium
	    spec:
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: k8s-app
	                operator: In
	                values:
	                - cilium
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        command:
	        - cilium-agent
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9876
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 10
	          # The initial delay for the liveness probe is intentionally large to
	          # avoid an endless kill & restart cycle in the event that the initial
	          # bootstrapping takes longer than expected.
	          initialDelaySeconds: 120
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
	        readinessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9876
	            scheme: HTTP
	            httpHeaders:
	            - name: "brief"
	              value: "true"
	          failureThreshold: 3
	          initialDelaySeconds: 5
	          periodSeconds: 30
	          successThreshold: 1
	          timeoutSeconds: 5
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_FLANNEL_MASTER_DEVICE
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-master-device
	              name: cilium-config
	              optional: true
	        - name: CILIUM_FLANNEL_UNINSTALL_ON_EXIT
	          valueFrom:
	            configMapKeyRef:
	              key: flannel-uninstall-on-exit
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CLUSTERMESH_CONFIG
	          value: /var/lib/cilium/clustermesh/
	        - name: CILIUM_CNI_CHAINING_MODE
	          valueFrom:
	            configMapKeyRef:
	              key: cni-chaining-mode
	              name: cilium-config
	              optional: true
	        - name: CILIUM_CUSTOM_CNI_CONF
	          valueFrom:
	            configMapKeyRef:
	              key: custom-cni-conf
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        lifecycle:
	          postStart:
	            exec:
	              command:
	              - "/cni-install.sh"
	              - "--enable-debug=false"
	          preStop:
	            exec:
	              command:
	              - /cni-uninstall.sh
	        name: cilium-agent
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	            - SYS_MODULE
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        - mountPath: /host/opt/cni/bin
	          name: cni-path
	        - mountPath: /host/etc/cni/net.d
	          name: etc-cni-netd
	        - mountPath: /var/lib/cilium/clustermesh
	          name: clustermesh-secrets
	          readOnly: true
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
	          # Needed to be able to load kernel modules
	        - mountPath: /lib/modules
	          name: lib-modules
	          readOnly: true
	        - mountPath: /run/xtables.lock
	          name: xtables-lock
	        - mountPath: /var/lib/cilium/tls/hubble
	          name: hubble-tls
	          readOnly: true
	      hostNetwork: true
	      initContainers:
	      # Required to mount the cgroup2 filesystem on the underlying Kubernetes node.
	      # We use the nsenter command with the host's cgroup and mount namespaces enabled.
	      - name: mount-cgroup
	        env:
	          - name: CGROUP_ROOT
	            value: /run/cilium/cgroupv2
	          - name: BIN_PATH
	            value: /opt/cni/bin
	        command:
	          - sh
	          - -c
	          # The statically linked Go program binary is invoked to avoid any
	          # dependency on utilities like sh and mount that can be missing on certain
	          # distros installed on the underlying host. Copy the binary to the same
	          # directory where we install the cilium cni plugin so that exec
	          # permissions are available.
	          - 'cp /usr/bin/cilium-mount /hostbin/cilium-mount && nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; rm /hostbin/cilium-mount'
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        volumeMounts:
	          - mountPath: /hostproc
	            name: hostproc
	          - mountPath: /hostbin
	            name: cni-path
	        securityContext:
	          privileged: true
	      - command:
	        - /init-container.sh
	        env:
	        - name: CILIUM_ALL_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_BPF_STATE
	          valueFrom:
	            configMapKeyRef:
	              key: clean-cilium-bpf-state
	              name: cilium-config
	              optional: true
	        - name: CILIUM_WAIT_BPF_MOUNT
	          valueFrom:
	            configMapKeyRef:
	              key: wait-bpf-mount
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/cilium:v1.9.9@sha256:a85d5cff13f8231c2e267d9fc3c6e43d24be4a75dac9f641c11ec46e7f17624d"
	        imagePullPolicy: IfNotPresent
	        name: clean-cilium-state
	        securityContext:
	          capabilities:
	            add:
	            - NET_ADMIN
	          privileged: true
	        volumeMounts:
	        - mountPath: /sys/fs/bpf
	          name: bpf-maps
	          mountPropagation: HostToContainer
	          # Required to mount cgroup filesystem from the host to cilium agent pod
	        - mountPath: /run/cilium/cgroupv2
	          name: cilium-cgroup
	          mountPropagation: HostToContainer
	        - mountPath: /var/run/cilium
	          name: cilium-run
	        resources:
	          requests:
	            cpu: 100m
	            memory: 100Mi
	      restartPolicy: Always
	      priorityClassName: system-node-critical
	      serviceAccount: cilium
	      serviceAccountName: cilium
	      terminationGracePeriodSeconds: 1
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To keep state between restarts / upgrades
	      - hostPath:
	          path: /var/run/cilium
	          type: DirectoryOrCreate
	        name: cilium-run
	        # To keep state between restarts / upgrades for bpf maps
	      - hostPath:
	          path: /sys/fs/bpf
	          type: DirectoryOrCreate
	        name: bpf-maps
	      # To mount cgroup2 filesystem on the host
	      - hostPath:
	          path: /proc
	          type: Directory
	        name: hostproc
	      # To keep state between restarts / upgrades for cgroup2 filesystem
	      - hostPath:
	          path: /run/cilium/cgroupv2
	          type: DirectoryOrCreate
	        name: cilium-cgroup
	      # To install the cilium cni plugin on the host
	      - hostPath:
	          path: /opt/cni/bin
	          type: DirectoryOrCreate
	        name: cni-path
	        # To install the cilium cni configuration on the host
	      - hostPath:
	          path: /etc/cni/net.d
	          type: DirectoryOrCreate
	        name: etc-cni-netd
	        # To be able to load kernel modules
	      - hostPath:
	          path: /lib/modules
	        name: lib-modules
	        # To access iptables concurrently with other processes (e.g. kube-proxy)
	      - hostPath:
	          path: /run/xtables.lock
	          type: FileOrCreate
	        name: xtables-lock
	        # To read the clustermesh configuration
	      - name: clustermesh-secrets
	        secret:
	          defaultMode: 420
	          optional: true
	          secretName: cilium-clustermesh
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
	      - name: hubble-tls
	        projected:
	          sources:
	          - secret:
	              name: hubble-server-certs
	              items:
	                - key: tls.crt
	                  path: server.crt
	                - key: tls.key
	                  path: server.key
	              optional: true
	          - configMap:
	              name: hubble-ca-cert
	              items:
	                - key: ca.crt
	                  path: client-ca.crt
	              optional: true
	---
	# Source: cilium/templates/cilium-operator-deployment.yaml
	apiVersion: apps/v1
	kind: Deployment
	metadata:
	  labels:
	    io.cilium/app: operator
	    name: cilium-operator
	  name: cilium-operator
	  namespace: kube-system
	spec:
	  # We support HA mode only for Kubernetes version > 1.14
	  # See docs on ServerCapabilities.LeasesResourceLock in file pkg/k8s/version/version.go
	  # for more details.
	  replicas: 1
	  selector:
	    matchLabels:
	      io.cilium/app: operator
	      name: cilium-operator
	  strategy:
	    rollingUpdate:
	      maxSurge: 1
	      maxUnavailable: 1
	    type: RollingUpdate
	  template:
	    metadata:
	      annotations:
	      labels:
	        io.cilium/app: operator
	        name: cilium-operator
	    spec:
	      # In HA mode, cilium-operator pods must not be scheduled on the same
	      # node as they will clash with each other.
	      affinity:
	        podAntiAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	          - labelSelector:
	              matchExpressions:
	              - key: io.cilium/app
	                operator: In
	                values:
	                - operator
	            topologyKey: kubernetes.io/hostname
	      containers:
	      - args:
	        - --config-dir=/tmp/cilium/config-map
	        - --debug=$(CILIUM_DEBUG)
	        command:
	        - cilium-operator-generic
	        env:
	        - name: K8S_NODE_NAME
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: spec.nodeName
	        - name: CILIUM_K8S_NAMESPACE
	          valueFrom:
	            fieldRef:
	              apiVersion: v1
	              fieldPath: metadata.namespace
	        - name: CILIUM_DEBUG
	          valueFrom:
	            configMapKeyRef:
	              key: debug
	              name: cilium-config
	              optional: true
	        image: "quay.io/cilium/operator-generic:v1.9.9@sha256:3726a965cd960295ca3c5e7f2b543c02096c0912c6652eb8bbb9ce54bcaa99d8"
	        imagePullPolicy: IfNotPresent
	        name: cilium-operator
	        livenessProbe:
	          httpGet:
	            host: '127.0.0.1'
	            path: /healthz
	            port: 9234
	            scheme: HTTP
	          initialDelaySeconds: 60
	          periodSeconds: 10
	          timeoutSeconds: 3
	        volumeMounts:
	        - mountPath: /tmp/cilium/config-map
	          name: cilium-config-path
	          readOnly: true
	      hostNetwork: true
	      restartPolicy: Always
	      priorityClassName: system-cluster-critical
	      serviceAccount: cilium-operator
	      serviceAccountName: cilium-operator
	      tolerations:
	      - operator: Exists
	      volumes:
	        # To read the configuration from the config map
	      - configMap:
	          name: cilium-config
	        name: cilium-config-path
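
	Note on the probe configuration above: the cilium-jkssv agent pod in this run never reports Ready (see the pod_ready.go polling further down), which suggests the readiness probe defined above, an HTTP GET against 127.0.0.1:9876/healthz with a "brief: true" header on the host network, is not succeeding. A minimal Go sketch of the equivalent request, useful for reproducing the probe by hand from the node; this is an illustrative snippet, not part of minikube or Cilium:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Same endpoint and header as the cilium-agent readiness probe in the
		// manifest above; the agent runs with hostNetwork: true, so the probe
		// targets loopback on the node itself.
		req, err := http.NewRequest("GET", "http://127.0.0.1:9876/healthz", nil)
		if err != nil {
			panic(err)
		}
		req.Header.Set("brief", "true")

		client := &http.Client{Timeout: 5 * time.Second} // matches timeoutSeconds: 5
		resp, err := client.Do(req)
		if err != nil {
			fmt.Println("probe failed:", err) // what the kubelet would count as a failure
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status, string(body))
	}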
	
	I0512 01:30:15.641765    5648 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0512 01:30:15.641765    5648 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (23204 bytes)
	I0512 01:30:15.753805    5648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0512 01:30:19.341694    5648 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (3.5877072s)
	I0512 01:30:19.341694    5648 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0512 01:30:19.354689    5648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=50a7977b568d2ad3e04003527a57f4502d6177a0 minikube.k8s.io/name=cilium-20220512010244-7184 minikube.k8s.io/updated_at=2022_05_12T01_30_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:19.355680    5648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:19.361690    5648 ops.go:34] apiserver oom_adj: -16
	I0512 01:30:19.649953    5648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:20.383329    5648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:20.887719    5648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:21.394020    5648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:21.884335    5648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:22.378452    5648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:22.890644    5648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:23.390615    5648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:23.881399    5648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:24.383357    5648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:24.886462    5648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:25.401229    5648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:25.880011    5648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:26.384904    5648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:27.390887    5648 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:28.327323    5648 kubeadm.go:1020] duration metric: took 8.9851728s to wait for elevateKubeSystemPrivileges.
	I0512 01:30:28.327323    5648 kubeadm.go:393] StartCluster complete in 52.5818202s
	I0512 01:30:28.328356    5648 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:30:28.328356    5648 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0512 01:30:28.331335    5648 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:30:29.063454    5648 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cilium-20220512010244-7184" rescaled to 1
	I0512 01:30:29.064435    5648 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0512 01:30:29.064435    5648 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 01:30:29.064435    5648 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0512 01:30:29.067445    5648 out.go:177] * Verifying Kubernetes components...
	I0512 01:30:29.064435    5648 addons.go:65] Setting storage-provisioner=true in profile "cilium-20220512010244-7184"
	I0512 01:30:29.064435    5648 addons.go:65] Setting default-storageclass=true in profile "cilium-20220512010244-7184"
	I0512 01:30:29.065443    5648 config.go:178] Loaded profile config "cilium-20220512010244-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:30:29.067445    5648 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cilium-20220512010244-7184"
	I0512 01:30:29.067445    5648 addons.go:153] Setting addon storage-provisioner=true in "cilium-20220512010244-7184"
	W0512 01:30:29.071475    5648 addons.go:165] addon storage-provisioner should already be in state true
	I0512 01:30:29.071475    5648 host.go:66] Checking if "cilium-20220512010244-7184" exists ...
	I0512 01:30:29.085450    5648 ssh_runner.go:195] Run: sudo service kubelet status
	I0512 01:30:29.093457    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:30:29.094461    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:30:29.408057    5648 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0512 01:30:29.431001    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	I0512 01:30:30.337178    5648 start.go:815] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0512 01:30:30.676886    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.5832985s)
	I0512 01:30:30.679857    5648 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0512 01:30:30.683052    5648 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 01:30:30.683052    5648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0512 01:30:30.694378    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	I0512 01:30:30.706850    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.6123076s)
	I0512 01:30:30.727218    5648 addons.go:153] Setting addon default-storageclass=true in "cilium-20220512010244-7184"
	W0512 01:30:30.727515    5648 addons.go:165] addon default-storageclass should already be in state true
	I0512 01:30:30.727676    5648 host.go:66] Checking if "cilium-20220512010244-7184" exists ...
	I0512 01:30:30.754492    5648 cli_runner.go:164] Run: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}
	I0512 01:30:30.977840    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.5467608s)
	I0512 01:30:30.979853    5648 node_ready.go:35] waiting up to 5m0s for node "cilium-20220512010244-7184" to be "Ready" ...
	I0512 01:30:31.009764    5648 node_ready.go:49] node "cilium-20220512010244-7184" has status "Ready":"True"
	I0512 01:30:31.010316    5648 node_ready.go:38] duration metric: took 30.4618ms waiting for node "cilium-20220512010244-7184" to be "Ready" ...
	I0512 01:30:31.010316    5648 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0512 01:30:31.035552    5648 pod_ready.go:78] waiting up to 5m0s for pod "cilium-jkssv" in "kube-system" namespace to be "Ready" ...
	I0512 01:30:32.163691    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.469238s)
	I0512 01:30:32.164698    5648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51226 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cilium-20220512010244-7184\id_rsa Username:docker}
	I0512 01:30:32.227230    5648 cli_runner.go:217] Completed: docker container inspect cilium-20220512010244-7184 --format={{.State.Status}}: (1.4726638s)
	I0512 01:30:32.227230    5648 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0512 01:30:32.227230    5648 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0512 01:30:32.244239    5648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184
	I0512 01:30:32.438519    5648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 01:30:33.219428    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:30:33.762231    5648 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20220512010244-7184: (1.517915s)
	I0512 01:30:33.762231    5648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51226 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\cilium-20220512010244-7184\id_rsa Username:docker}
	I0512 01:30:34.226320    5648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.7877111s)
	I0512 01:30:34.337569    5648 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0512 01:30:35.716865    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:30:35.826422    5648 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.488777s)
	I0512 01:30:35.831773    5648 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0512 01:30:35.837783    5648 addons.go:417] enableAddons completed in 6.7730047s
	I0512 01:30:37.717834    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:30:40.160058    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:30:42.332960    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:30:44.809962    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:30:47.316139    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:30:49.707841    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:30:52.527962    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:30:54.909784    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:30:57.204878    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:30:59.321568    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:01.717424    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:03.718951    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:11.090895    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:13.222461    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:15.224295    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:17.905193    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:20.225730    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:22.656714    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:24.717828    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:27.158035    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:29.672602    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:32.160111    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:34.161990    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:36.712431    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:38.715781    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:41.160791    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:43.218148    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:45.654279    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:48.454711    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:50.742101    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:53.309279    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:55.656344    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:57.656852    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:59.658569    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:01.659543    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:03.721801    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:06.155915    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:08.157594    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:10.169166    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:12.720650    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:15.158927    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:17.162839    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:19.163082    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:21.165824    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:23.171331    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:25.662916    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:28.163182    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:30.164129    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:32.713188    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:34.721281    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:37.155819    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:39.159346    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:41.664496    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:44.165521    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:46.712865    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:49.167012    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:51.660744    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:53.668133    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:55.673364    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:58.158650    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:00.161087    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:02.175646    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:04.713888    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:06.716854    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:09.168409    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:11.670636    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:14.161956    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:16.163181    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:18.218898    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:20.662025    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:23.210297    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:25.658977    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:28.167573    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:30.170136    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:32.191678    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:34.666840    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:37.163901    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:39.174319    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:41.673978    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:44.159620    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:46.169661    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:48.658835    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:50.662449    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:52.714016    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:54.719659    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:57.161750    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:59.657913    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:01.668537    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:04.177440    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:06.728895    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:09.172396    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:11.180827    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:13.676635    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:16.166667    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:18.231079    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:20.670198    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:22.720227    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:25.170073    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:27.658283    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:29.668023    5648 pod_ready.go:102] pod "cilium-jkssv" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:31.186614    5648 pod_ready.go:81] duration metric: took 4m0.1388943s waiting for pod "cilium-jkssv" in "kube-system" namespace to be "Ready" ...
	E0512 01:34:31.186614    5648 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0512 01:34:31.186614    5648 pod_ready.go:78] waiting up to 5m0s for pod "cilium-operator-78f49c47f-2l2kl" in "kube-system" namespace to be "Ready" ...
	I0512 01:34:31.227780    5648 pod_ready.go:92] pod "cilium-operator-78f49c47f-2l2kl" in "kube-system" namespace has status "Ready":"True"
	I0512 01:34:31.227780    5648 pod_ready.go:81] duration metric: took 41.1637ms waiting for pod "cilium-operator-78f49c47f-2l2kl" in "kube-system" namespace to be "Ready" ...
	I0512 01:34:31.227780    5648 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-b2472" in "kube-system" namespace to be "Ready" ...
	I0512 01:34:31.236789    5648 pod_ready.go:97] error getting pod "coredns-64897985d-b2472" in "kube-system" namespace (skipping!): pods "coredns-64897985d-b2472" not found
	I0512 01:34:31.236789    5648 pod_ready.go:81] duration metric: took 9.0082ms waiting for pod "coredns-64897985d-b2472" in "kube-system" namespace to be "Ready" ...
	E0512 01:34:31.236789    5648 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-b2472" in "kube-system" namespace (skipping!): pods "coredns-64897985d-b2472" not found
	I0512 01:34:31.236789    5648 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-tknx6" in "kube-system" namespace to be "Ready" ...
	I0512 01:34:33.286698    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:35.784614    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:37.788112    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:39.788422    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:42.284194    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:44.289352    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:46.291154    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:48.293264    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:50.294223    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:52.786941    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:55.287235    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:57.781493    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:59.789547    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:01.802955    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:04.292669    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:06.779619    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:08.796778    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:11.285448    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:13.788454    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:16.287540    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:18.784537    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:20.794581    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:23.299104    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:25.785684    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:27.795358    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:30.290138    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:32.786624    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:35.287924    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:37.298443    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:39.820306    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:42.296727    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:44.796707    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:47.288164    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:49.295212    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:51.783681    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:53.789362    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:56.292841    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:58.328976    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:00.788195    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:02.822202    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:05.295729    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:07.790255    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:10.283281    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:12.294289    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:14.798316    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:16.834794    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:19.295398    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:21.796002    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:24.298649    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:26.792491    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:28.793998    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:31.288578    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:33.290305    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:35.785144    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:37.790516    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:39.794825    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:42.291067    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:44.789171    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:46.793718    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:48.798077    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:50.820540    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:53.290972    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:55.306859    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:57.797495    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:00.284928    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:02.292519    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:04.793848    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:07.297938    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:09.301825    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:11.784022    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:13.795784    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:15.797130    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:17.800611    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:20.284298    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:22.297580    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:24.788066    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:26.790082    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:28.792048    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:31.289023    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:33.300324    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:35.782305    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:37.783753    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:39.798310    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:41.804684    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:44.282616    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:46.290507    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:48.300829    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:50.795108    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:53.289915    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:55.290349    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:57.292209    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:59.411193    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:01.798312    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:04.286683    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:06.293740    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:08.790592    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:10.791181    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:12.793629    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:14.799911    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:17.294573    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:19.798043    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:21.805942    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:24.294211    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:26.298205    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:28.794262    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:30.800840    5648 pod_ready.go:102] pod "coredns-64897985d-tknx6" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:31.325453    5648 pod_ready.go:81] duration metric: took 4m0.07648s waiting for pod "coredns-64897985d-tknx6" in "kube-system" namespace to be "Ready" ...
	E0512 01:38:31.325453    5648 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0512 01:38:31.325453    5648 pod_ready.go:38] duration metric: took 8m0.2907814s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0512 01:38:31.329391    5648 out.go:177] 
	W0512 01:38:31.333174    5648 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0512 01:38:31.333174    5648 out.go:239] * 
	W0512 01:38:31.335350    5648 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0512 01:38:31.340039    5648 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/cilium/Start (977.31s)
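
The cilium failure above is a readiness wait running out of time: minikube's pod_ready.go polls each pod's Ready condition, and the "timed out waiting for the condition" errors in the log are the standard timeout error from Kubernetes' polling helper. A rough client-go sketch of a poll loop of that shape (illustrative, not minikube's actual code; building the clientset from a kubeconfig is omitted):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls until the pod's Ready condition is True or the timeout
	// elapses; on timeout, wait.PollImmediate returns the familiar
	// "timed out waiting for the condition" error seen in the log above.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient lookup errors as "not ready yet"
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		// Clientset construction (e.g. via k8s.io/client-go/tools/clientcmd) is
		// left out; with one in hand, the failing wait in this run would look like:
		//   waitPodReady(cs, "kube-system", "cilium-jkssv", 5*time.Minute)
		fmt.Println("see waitPodReady above")
	}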

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (957.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-20220512010244-7184 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p calico-20220512010244-7184 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker: exit status 80 (15m57.073779s)

                                                
                                                
-- stdout --
	* [calico-20220512010244-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13639
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node calico-20220512010244-7184 in cluster calico-20220512010244-7184
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "calico-20220512010244-7184" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0512 01:22:49.255926    8652 out.go:296] Setting OutFile to fd 1800 ...
	I0512 01:22:49.334939    8652 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:22:49.334939    8652 out.go:309] Setting ErrFile to fd 1752...
	I0512 01:22:49.334939    8652 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:22:49.349961    8652 out.go:303] Setting JSON to false
	I0512 01:22:49.352947    8652 start.go:115] hostinfo: {"hostname":"minikube4","uptime":17022,"bootTime":1652301547,"procs":169,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0512 01:22:49.352947    8652 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0512 01:22:49.357939    8652 out.go:177] * [calico-20220512010244-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0512 01:22:49.360943    8652 notify.go:193] Checking for updates...
	I0512 01:22:49.363948    8652 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0512 01:22:49.366946    8652 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0512 01:22:49.369944    8652 out.go:177]   - MINIKUBE_LOCATION=13639
	I0512 01:22:49.372935    8652 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0512 01:22:49.376947    8652 config.go:178] Loaded profile config "auto-20220512010229-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:22:49.376947    8652 config.go:178] Loaded profile config "cilium-20220512010244-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:22:49.377951    8652 config.go:178] Loaded profile config "default-k8s-different-port-20220512011148-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:22:49.377951    8652 driver.go:358] Setting default libvirt URI to qemu:///system
	I0512 01:22:52.267972    8652 docker.go:137] docker version: linux-20.10.14
	I0512 01:22:52.283570    8652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:22:54.453055    8652 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1693461s)
	I0512 01:22:54.453707    8652 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:80 OomKillDisable:true NGoroutines:70 SystemTime:2022-05-12 01:22:53.3631805 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:22:54.457932    8652 out.go:177] * Using the docker driver based on user configuration
	I0512 01:22:54.460104    8652 start.go:284] selected driver: docker
	I0512 01:22:54.460104    8652 start.go:801] validating driver "docker" against <nil>
	I0512 01:22:54.460104    8652 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0512 01:22:54.535467    8652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:22:56.732271    8652 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1963949s)
	I0512 01:22:56.732530    8652 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:80 OomKillDisable:true NGoroutines:70 SystemTime:2022-05-12 01:22:55.650902 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:22:56.732530    8652 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0512 01:22:56.733303    8652 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0512 01:22:56.736797    8652 out.go:177] * Using Docker Desktop driver with the root privilege
	I0512 01:22:56.738997    8652 cni.go:95] Creating CNI manager for "calico"
	I0512 01:22:56.739052    8652 start_flags.go:301] Found "Calico" CNI - setting NetworkPlugin=cni
	I0512 01:22:56.739052    8652 start_flags.go:306] config:
	{Name:calico-20220512010244-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:calico-20220512010244-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 01:22:56.742009    8652 out.go:177] * Starting control plane node calico-20220512010244-7184 in cluster calico-20220512010244-7184
	I0512 01:22:56.745671    8652 cache.go:120] Beginning downloading kic base image for docker with docker
	I0512 01:22:56.748727    8652 out.go:177] * Pulling base image ...
	I0512 01:22:56.750810    8652 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:22:56.750810    8652 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0512 01:22:56.751827    8652 preload.go:148] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0512 01:22:56.751827    8652 cache.go:57] Caching tarball of preloaded images
	I0512 01:22:56.751827    8652 preload.go:174] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0512 01:22:56.751827    8652 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0512 01:22:56.751827    8652 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\config.json ...
	I0512 01:22:56.752818    8652 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\config.json: {Name:mkcf8051d840dd4284fb6a9264c17ea3431f32ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:22:57.852905    8652 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
	I0512 01:22:57.852967    8652 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in daemon, skipping load
	I0512 01:22:57.853021    8652 cache.go:206] Successfully downloaded all kic artifacts
	I0512 01:22:57.853203    8652 start.go:352] acquiring machines lock for calico-20220512010244-7184: {Name:mk4c565448c25667cc580dea7eba672b7794a8a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 01:22:57.853353    8652 start.go:356] acquired machines lock for "calico-20220512010244-7184" in 107.2µs
	I0512 01:22:57.853353    8652 start.go:91] Provisioning new machine with config: &{Name:calico-20220512010244-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:calico-20220512010244-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 01:22:57.853978    8652 start.go:131] createHost starting for "" (driver="docker")
	I0512 01:22:57.858696    8652 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0512 01:22:57.858696    8652 start.go:165] libmachine.API.Create for "calico-20220512010244-7184" (driver="docker")
	I0512 01:22:57.858696    8652 client.go:168] LocalClient.Create starting
	I0512 01:22:57.859696    8652 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0512 01:22:57.859696    8652 main.go:134] libmachine: Decoding PEM data...
	I0512 01:22:57.859696    8652 main.go:134] libmachine: Parsing certificate...
	I0512 01:22:57.859696    8652 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0512 01:22:57.859696    8652 main.go:134] libmachine: Decoding PEM data...
	I0512 01:22:57.859696    8652 main.go:134] libmachine: Parsing certificate...
	I0512 01:22:57.869705    8652 cli_runner.go:164] Run: docker network inspect calico-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0512 01:22:59.018885    8652 cli_runner.go:211] docker network inspect calico-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 01:22:59.018885    8652 cli_runner.go:217] Completed: docker network inspect calico-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.14912s)
	I0512 01:22:59.025886    8652 network_create.go:272] running [docker network inspect calico-20220512010244-7184] to gather additional debugging logs...
	I0512 01:22:59.026886    8652 cli_runner.go:164] Run: docker network inspect calico-20220512010244-7184
	W0512 01:23:00.227139    8652 cli_runner.go:211] docker network inspect calico-20220512010244-7184 returned with exit code 1
	I0512 01:23:00.227204    8652 cli_runner.go:217] Completed: docker network inspect calico-20220512010244-7184: (1.2000075s)
	I0512 01:23:00.227204    8652 network_create.go:275] error running [docker network inspect calico-20220512010244-7184]: docker network inspect calico-20220512010244-7184: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220512010244-7184
	I0512 01:23:00.227204    8652 network_create.go:277] output of [docker network inspect calico-20220512010244-7184]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220512010244-7184
	
	** /stderr **
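
This "No such network" result is expected at this stage: the profile's network has not been created yet, and the non-zero exit from docker network inspect is what tells minikube it is free to create one. A minimal Go sketch of that probe pattern (hypothetical helper, not minikube's actual API):

    // networkExists probes for a Docker network the way the log above shows:
    // run `docker network inspect <name>` and treat a non-zero exit
    // (stderr: "No such network") as "the network does not exist yet".
    package main

    import (
        "fmt"
        "os/exec"
    )

    func networkExists(name string) (bool, error) {
        err := exec.Command("docker", "network", "inspect", name).Run()
        if err == nil {
            return true, nil
        }
        if _, isExit := err.(*exec.ExitError); isExit {
            return false, nil // exit code 1: no such network
        }
        return false, err // e.g. docker binary not found
    }

    func main() {
        exists, err := networkExists("calico-20220512010244-7184")
        fmt.Println(exists, err)
    }
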
	I0512 01:23:00.240342    8652 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 01:23:01.347301    8652 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1066943s)
	I0512 01:23:01.368268    8652 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00014e698] misses:0}
	I0512 01:23:01.369262    8652 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:23:01.369292    8652 network_create.go:115] attempt to create docker network calico-20220512010244-7184 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0512 01:23:01.377313    8652 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220512010244-7184
	I0512 01:23:04.700017    8652 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220512010244-7184: (3.3225329s)
	I0512 01:23:04.700017    8652 network_create.go:99] docker network calico-20220512010244-7184 192.168.49.0/24 created
	I0512 01:23:04.700017    8652 kic.go:106] calculated static IP "192.168.49.2" for the "calico-20220512010244-7184" container
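
The "calculated static IP" follows from the subnet selection above: with gateway 192.168.49.1, the first client address (ClientMin) is 192.168.49.2, and the node container is pinned to it. A sketch of the arithmetic, assuming a /24 subnet as in this run:

    // Derive the node IP from the gateway by taking the next address in the
    // /24 (192.168.49.1 -> 192.168.49.2). Illustrative only.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        gw := net.ParseIP("192.168.49.1").To4()
        node := net.IPv4(gw[0], gw[1], gw[2], gw[3]+1)
        fmt.Println(node) // 192.168.49.2
    }
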
	I0512 01:23:04.713014    8652 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0512 01:23:05.848280    8652 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1352075s)
	I0512 01:23:05.856270    8652 cli_runner.go:164] Run: docker volume create calico-20220512010244-7184 --label name.minikube.sigs.k8s.io=calico-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true
	I0512 01:23:06.961524    8652 cli_runner.go:217] Completed: docker volume create calico-20220512010244-7184 --label name.minikube.sigs.k8s.io=calico-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true: (1.1051966s)
	I0512 01:23:06.961524    8652 oci.go:103] Successfully created a docker volume calico-20220512010244-7184
	I0512 01:23:06.968519    8652 cli_runner.go:164] Run: docker run --rm --name calico-20220512010244-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220512010244-7184 --entrypoint /usr/bin/test -v calico-20220512010244-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib
	I0512 01:23:09.590199    8652 cli_runner.go:217] Completed: docker run --rm --name calico-20220512010244-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220512010244-7184 --entrypoint /usr/bin/test -v calico-20220512010244-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib: (2.6214634s)
	I0512 01:23:09.590425    8652 oci.go:107] Successfully prepared a docker volume calico-20220512010244-7184
	I0512 01:23:09.590425    8652 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:23:09.590425    8652 kic.go:179] Starting extracting preloaded images to volume ...
	I0512 01:23:09.599988    8652 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220512010244-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir
	I0512 01:23:33.361175    8652 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220512010244-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir: (23.7598998s)
	I0512 01:23:33.361175    8652 kic.go:188] duration metric: took 23.769527 seconds to extract preloaded images to volume
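
The 23.8s step above is the preload shortcut: the images tarball is bind-mounted read-only into a throwaway kicbase container whose entrypoint is tar, and tar unpacks it into the named volume that later becomes the node's /var. The same invocation, driven from Go purely for illustration:

    // One-shot container whose entrypoint is tar: mount the preload tarball
    // read-only, mount the profile's volume at /extractDir, and unpack.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", `C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro`,
            "-v", "calico-20220512010244-7184:/extractDir",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a",
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        out, err := cmd.CombinedOutput()
        fmt.Println(err, string(out))
    }
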
	I0512 01:23:33.369229    8652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:23:35.530814    8652 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1614082s)
	I0512 01:23:35.531715    8652 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:60 OomKillDisable:true NGoroutines:53 SystemTime:2022-05-12 01:23:34.4465271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:23:35.546625    8652 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0512 01:23:37.652387    8652 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.1046551s)
	I0512 01:23:37.660279    8652 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220512010244-7184 --name calico-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220512010244-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220512010244-7184 --network calico-20220512010244-7184 --ip 192.168.49.2 --volume calico-20220512010244-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
	W0512 01:23:38.891475    8652 cli_runner.go:211] docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220512010244-7184 --name calico-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220512010244-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220512010244-7184 --network calico-20220512010244-7184 --ip 192.168.49.2 --volume calico-20220512010244-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a returned with exit code 125
	I0512 01:23:38.891475    8652 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220512010244-7184 --name calico-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220512010244-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220512010244-7184 --network calico-20220512010244-7184 --ip 192.168.49.2 --volume calico-20220512010244-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: (1.2311332s)
	I0512 01:23:38.891475    8652 client.go:171] LocalClient.Create took 41.0306676s
	I0512 01:23:40.908223    8652 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 01:23:40.919370    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184
	W0512 01:23:42.009897    8652 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184 returned with exit code 1
	I0512 01:23:42.009897    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.0904711s)
	I0512 01:23:42.009897    8652 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0512 01:23:42.297492    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184
	W0512 01:23:43.343565    8652 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184 returned with exit code 1
	I0512 01:23:43.343565    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.0460196s)
	W0512 01:23:43.343565    8652 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0512 01:23:43.344558    8652 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0512 01:23:43.357564    8652 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0512 01:23:43.364561    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184
	W0512 01:23:44.586238    8652 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184 returned with exit code 1
	I0512 01:23:44.586238    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.2216137s)
	I0512 01:23:44.586238    8652 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0512 01:23:44.894692    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184
	W0512 01:23:45.997530    8652 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184 returned with exit code 1
	I0512 01:23:45.997530    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.1027811s)
	W0512 01:23:45.997530    8652 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0512 01:23:45.997530    8652 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
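
Both df probes fail for the same underlying reason: minikube reaches the node over SSH on whatever host port Docker mapped to the container's 22/tcp (published as 127.0.0.1::22 in the docker run above), and that mapping only exists while the container is running. A sketch of the lookup that keeps failing here:

    // Look up the ephemeral host port Docker assigned to the container's
    // 22/tcp. The template errors out when the container is not running,
    // because .NetworkSettings.Ports is empty, which is the failure above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func sshHostPort(container string) (string, error) {
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect",
            "-f", tmpl, container).Output()
        if err != nil {
            return "", fmt.Errorf("get ssh host-port: %w", err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        fmt.Println(sshHostPort("calico-20220512010244-7184"))
    }
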
	I0512 01:23:45.997530    8652 start.go:134] duration metric: createHost completed in 48.141074s
	I0512 01:23:45.997530    8652 start.go:81] releasing machines lock for "calico-20220512010244-7184", held for 48.1416993s
	W0512 01:23:45.997530    8652 start.go:608] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220512010244-7184 --name calico-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220512010244-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220512010244-7184 --network calico-20220512010244-7184 --ip 192.168.49.2 --volume calico-20220512010244-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: exit status 125
	stdout:
	42260fb35b9f6c8dc51706a11a6662b410a0fafb217c805e626ae317096e4c99
	
	stderr:
	docker: Error response from daemon: network calico-20220512010244-7184 not found.
	I0512 01:23:46.011530    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:23:47.139047    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1274589s)
	W0512 01:23:47.139047    8652 start.go:613] delete host: Docker machine "calico-20220512010244-7184" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0512 01:23:47.139047    8652 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220512010244-7184 --name calico-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220512010244-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220512010244-7184 --network calico-20220512010244-7184 --ip 192.168.49.2 --volume calico-20220512010244-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: exit status 125
	stdout:
	42260fb35b9f6c8dc51706a11a6662b410a0fafb217c805e626ae317096e4c99
	
	stderr:
	docker: Error response from daemon: network calico-20220512010244-7184 not found.
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220512010244-7184 --name calico-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220512010244-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220512010244-7184 --network calico-20220512010244-7184 --ip 192.168.49.2 --volume calico-20220512010244-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: exit status 125
	stdout:
	42260fb35b9f6c8dc51706a11a6662b410a0fafb217c805e626ae317096e4c99
	
	stderr:
	docker: Error response from daemon: network calico-20220512010244-7184 not found.
	
	I0512 01:23:47.139047    8652 start.go:623] Will try again in 5 seconds ...
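
The first StartHost attempt thus dies on a contradiction worth spelling out: docker network create succeeded at 01:23:04, yet docker run --network reported the network missing at 01:23:38. One plausible reading, given the three other profiles loaded at the top of this trace and the fact that the subnet reservation at 01:23:01 is held only in this process's memory, is that a concurrent test tore the network down in between. The failure mode itself is easy to reproduce (network name made up):

    // Reproduce "network not found" at run time: the network exists when
    // created, is removed concurrently, and docker run fails with exit 125.
    // Purely illustrative.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        exec.Command("docker", "network", "create", "demo-net").Run()
        exec.Command("docker", "network", "rm", "demo-net").Run() // stands in for the concurrent delete
        out, err := exec.Command("docker", "run", "--rm", "--network", "demo-net",
            "busybox", "true").CombinedOutput()
        fmt.Println(err)       // exit status 125
        fmt.Print(string(out)) // docker: Error response from daemon: network demo-net not found.
    }
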
	I0512 01:23:52.152654    8652 start.go:352] acquiring machines lock for calico-20220512010244-7184: {Name:mk4c565448c25667cc580dea7eba672b7794a8a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 01:23:52.153263    8652 start.go:356] acquired machines lock for "calico-20220512010244-7184" in 0s
	I0512 01:23:52.153363    8652 start.go:94] Skipping create...Using existing machine configuration
	I0512 01:23:52.153363    8652 fix.go:55] fixHost starting: 
	I0512 01:23:52.166351    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:23:53.222139    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0547335s)
	I0512 01:23:53.222328    8652 fix.go:103] recreateIfNeeded on calico-20220512010244-7184: state= err=<nil>
	I0512 01:23:53.222328    8652 fix.go:108] machineExists: false. err=machine does not exist
	I0512 01:23:53.228320    8652 out.go:177] * docker "calico-20220512010244-7184" container is missing, will recreate.
	I0512 01:23:53.233505    8652 delete.go:124] DEMOLISHING calico-20220512010244-7184 ...
	I0512 01:23:53.247505    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:23:54.322501    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0749413s)
	I0512 01:23:54.322501    8652 stop.go:79] host is in state 
	I0512 01:23:54.322501    8652 main.go:134] libmachine: Stopping "calico-20220512010244-7184"...
	I0512 01:23:54.338938    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:23:55.445901    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1034085s)
	I0512 01:23:55.464543    8652 kic_runner.go:93] Run: systemctl --version
	I0512 01:23:55.464543    8652 kic_runner.go:114] Args: [docker exec --privileged calico-20220512010244-7184 systemctl --version]
	I0512 01:23:56.673439    8652 kic_runner.go:93] Run: sudo service kubelet stop
	I0512 01:23:56.673439    8652 kic_runner.go:114] Args: [docker exec --privileged calico-20220512010244-7184 sudo service kubelet stop]
	I0512 01:23:57.815578    8652 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 42260fb35b9f6c8dc51706a11a6662b410a0fafb217c805e626ae317096e4c99 is not running
	
	** /stderr **
	W0512 01:23:57.815578    8652 kic.go:439] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 42260fb35b9f6c8dc51706a11a6662b410a0fafb217c805e626ae317096e4c99 is not running
	I0512 01:23:57.834646    8652 kic_runner.go:93] Run: sudo service kubelet stop
	I0512 01:23:57.834646    8652 kic_runner.go:114] Args: [docker exec --privileged calico-20220512010244-7184 sudo service kubelet stop]
	I0512 01:23:58.946194    8652 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 42260fb35b9f6c8dc51706a11a6662b410a0fafb217c805e626ae317096e4c99 is not running
	
	** /stderr **
	W0512 01:23:58.946194    8652 kic.go:441] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 42260fb35b9f6c8dc51706a11a6662b410a0fafb217c805e626ae317096e4c99 is not running
	I0512 01:23:58.960177    8652 kic_runner.go:93] Run: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0512 01:23:58.960177    8652 kic_runner.go:114] Args: [docker exec --privileged calico-20220512010244-7184 docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}]
	I0512 01:24:00.055116    8652 kic.go:452] unable list containers : docker: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 42260fb35b9f6c8dc51706a11a6662b410a0fafb217c805e626ae317096e4c99 is not running
	I0512 01:24:00.058309    8652 kic.go:462] successfully stopped kubernetes!
	I0512 01:24:00.079181    8652 kic_runner.go:93] Run: pgrep kube-apiserver
	I0512 01:24:00.079181    8652 kic_runner.go:114] Args: [docker exec --privileged calico-20220512010244-7184 pgrep kube-apiserver]
	I0512 01:24:02.287672    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:03.399245    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1115164s)
	I0512 01:24:06.423810    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:07.503456    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0795901s)
	I0512 01:24:10.536792    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:11.593031    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.056114s)
	I0512 01:24:14.611739    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:15.643719    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0311993s)
	I0512 01:24:18.672870    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:19.773176    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1001283s)
	I0512 01:24:22.796192    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:23.945358    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1491065s)
	I0512 01:24:26.966937    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:28.064703    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0975292s)
	I0512 01:24:31.089243    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:32.162392    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0718363s)
	I0512 01:24:35.184743    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:36.251927    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.067011s)
	I0512 01:24:39.287510    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:40.399447    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1118799s)
	I0512 01:24:43.418993    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:44.447131    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0280851s)
	I0512 01:24:47.467493    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:48.534108    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.066437s)
	I0512 01:24:51.562020    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:52.650172    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0871062s)
	I0512 01:24:55.677246    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:56.777736    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1003421s)
	I0512 01:24:59.810634    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:01.034775    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0899918s)
	I0512 01:25:04.050891    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:05.156161    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1052129s)
	I0512 01:25:08.182473    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:09.266191    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0836013s)
	I0512 01:25:12.295235    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:13.351006    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0557164s)
	I0512 01:25:16.383011    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:17.510662    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1273327s)
	I0512 01:25:20.543327    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:21.611295    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0679131s)
	I0512 01:25:24.634597    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:25.701116    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0662162s)
	I0512 01:25:28.720895    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:29.872032    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1510776s)
	I0512 01:25:32.899636    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:33.977231    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0772739s)
	I0512 01:25:36.996628    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:38.090030    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0932391s)
	I0512 01:25:41.106951    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:42.166783    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0586869s)
	I0512 01:25:45.196262    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:46.317203    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1203017s)
	I0512 01:25:49.340457    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:50.388496    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.047985s)
	I0512 01:25:53.419307    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:54.469982    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.050621s)
	I0512 01:25:57.502157    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:58.603170    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1008644s)
	I0512 01:26:01.629764    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:02.707009    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0770628s)
	I0512 01:26:05.723416    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:06.831773    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1083005s)
	I0512 01:26:09.857487    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:10.976187    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1186429s)
	I0512 01:26:13.998604    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:15.115122    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1164603s)
	I0512 01:26:18.142540    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:19.251113    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1080886s)
	I0512 01:26:22.281397    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:23.448511    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1668016s)
	I0512 01:26:26.478738    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:27.640733    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1618468s)
	I0512 01:26:30.670750    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:31.814746    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1429359s)
	I0512 01:26:34.840904    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:35.927518    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0854444s)
	I0512 01:26:38.949236    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:40.051656    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1022607s)
	I0512 01:26:43.072973    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:44.249662    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1764689s)
	I0512 01:26:47.281954    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:48.400800    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1187887s)
	I0512 01:26:51.430905    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:52.525545    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.094478s)
	I0512 01:26:55.550180    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:56.611914    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0616798s)
	I0512 01:26:59.633848    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:00.705099    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0711326s)
	I0512 01:27:03.735760    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:04.813583    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0777679s)
	I0512 01:27:07.846659    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:08.927088    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0803338s)
	I0512 01:27:11.959948    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:13.060462    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1004578s)
	I0512 01:27:16.085247    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:17.144689    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0587328s)
	I0512 01:27:20.169036    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:21.283522    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1142388s)
	I0512 01:27:24.303182    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:25.363628    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.060023s)
	I0512 01:27:28.385566    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:29.469405    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0836552s)
	I0512 01:27:32.502663    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:33.600309    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0974654s)
	I0512 01:27:36.640511    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:37.700436    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.059758s)
	I0512 01:27:40.720916    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:41.808268    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0872957s)
	I0512 01:27:44.831145    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:45.907946    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0767453s)
	I0512 01:27:48.934895    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:50.051906    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.116954s)
	I0512 01:27:53.073475    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:54.177848    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1043159s)
	I0512 01:27:57.205050    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:58.321999    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1168915s)
	I0512 01:28:01.359637    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:28:02.448285    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0885923s)
	I0512 01:28:05.472281    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:28:06.622411    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1498527s)
	I0512 01:28:09.636185    8652 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0512 01:28:09.636185    8652 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
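The block above is minikube's stop path polling the container state roughly every four seconds until it gives up after 60 attempts ("Maximum number of retries (60) exceeded"). A minimal Go sketch of that pattern (not minikube's actual stop.go; the container name, cadence, and retry cap are taken from this log):

// poll_status.go - sketch of the retry loop visible above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForStatus(name, want string, maxRetries int) error {
	for i := 0; i < maxRetries; i++ {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").CombinedOutput()
		status := strings.TrimSpace(string(out))
		if err == nil && status == want {
			return nil
		}
		time.Sleep(4 * time.Second) // the log shows ~4s between attempts
	}
	return fmt.Errorf("maximum number of retries (%d) exceeded", maxRetries)
}

func main() {
	// container name from the log; "exited" is an assumed target state
	if err := waitForStatus("calico-20220512010244-7184", "exited", 60); err != nil {
		fmt.Println("stop err:", err)
	}
}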
	I0512 01:28:09.652456    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:28:10.754171    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1016586s)
	W0512 01:28:10.754171    8652 delete.go:135] deletehost failed: Docker machine "calico-20220512010244-7184" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0512 01:28:10.762172    8652 cli_runner.go:164] Run: docker container inspect -f {{.Id}} calico-20220512010244-7184
	I0512 01:28:11.869352    8652 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} calico-20220512010244-7184: (1.1070177s)
	I0512 01:28:11.877904    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:28:12.975117    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.097027s)
	I0512 01:28:12.984952    8652 cli_runner.go:164] Run: docker exec --privileged -t calico-20220512010244-7184 /bin/bash -c "sudo init 0"
	W0512 01:28:14.092784    8652 cli_runner.go:211] docker exec --privileged -t calico-20220512010244-7184 /bin/bash -c "sudo init 0" returned with exit code 1
	I0512 01:28:14.092853    8652 cli_runner.go:217] Completed: docker exec --privileged -t calico-20220512010244-7184 /bin/bash -c "sudo init 0": (1.107641s)
	I0512 01:28:14.092882    8652 oci.go:625] error shutdown calico-20220512010244-7184: docker exec --privileged -t calico-20220512010244-7184 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 42260fb35b9f6c8dc51706a11a6662b410a0fafb217c805e626ae317096e4c99 is not running
	I0512 01:28:15.111080    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:28:16.205036    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.0920409s)
	I0512 01:28:16.205036    8652 oci.go:639] temporary error: container calico-20220512010244-7184 status is  but expect it to be exited
	I0512 01:28:16.205036    8652 oci.go:645] Successfully shutdown container calico-20220512010244-7184
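Having failed to stop the host cleanly, minikube falls back to powering the guest off from inside with "sudo init 0" and then re-checks the container state; note that in this run the state came back empty, which oci.go still accepts as a successful shutdown. A hedged sketch of that fallback (assuming only the docker CLI; error strings mirrored from the log, not from minikube's source):

// shutdown_sketch.go - sketch of the exec-then-verify shutdown above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func shutdown(name string) error {
	out, err := exec.Command("docker", "exec", "--privileged", "-t", name,
		"/bin/bash", "-c", "sudo init 0").CombinedOutput()
	if err != nil && !strings.Contains(string(out), "is not running") {
		return fmt.Errorf("error shutdown %s: %v", name, err)
	}
	statusOut, _ := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	status := strings.TrimSpace(string(statusOut))
	// an empty status (container already gone) counts as shut down,
	// matching the oci.go:639/645 lines in this log
	if status == "exited" || status == "" {
		fmt.Printf("Successfully shutdown container %s\n", name)
		return nil
	}
	return fmt.Errorf("container %s status is %s but expect it to be exited", name, status)
}

func main() {
	if err := shutdown("calico-20220512010244-7184"); err != nil {
		fmt.Println(err)
	}
}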
	I0512 01:28:16.213051    8652 cli_runner.go:164] Run: docker rm -f -v calico-20220512010244-7184
	I0512 01:28:17.399268    8652 cli_runner.go:217] Completed: docker rm -f -v calico-20220512010244-7184: (1.1851759s)
	I0512 01:28:17.406244    8652 cli_runner.go:164] Run: docker container inspect -f {{.Id}} calico-20220512010244-7184
	W0512 01:28:18.511946    8652 cli_runner.go:211] docker container inspect -f {{.Id}} calico-20220512010244-7184 returned with exit code 1
	I0512 01:28:18.511946    8652 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} calico-20220512010244-7184: (1.105646s)
	I0512 01:28:18.520542    8652 cli_runner.go:164] Run: docker network inspect calico-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0512 01:28:19.581519    8652 cli_runner.go:211] docker network inspect calico-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 01:28:19.581519    8652 cli_runner.go:217] Completed: docker network inspect calico-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0609228s)
	I0512 01:28:19.588523    8652 network_create.go:272] running [docker network inspect calico-20220512010244-7184] to gather additional debugging logs...
	I0512 01:28:19.588523    8652 cli_runner.go:164] Run: docker network inspect calico-20220512010244-7184
	W0512 01:28:20.665164    8652 cli_runner.go:211] docker network inspect calico-20220512010244-7184 returned with exit code 1
	I0512 01:28:20.665164    8652 cli_runner.go:217] Completed: docker network inspect calico-20220512010244-7184: (1.0765862s)
	I0512 01:28:20.665164    8652 network_create.go:275] error running [docker network inspect calico-20220512010244-7184]: docker network inspect calico-20220512010244-7184: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220512010244-7184
	I0512 01:28:20.665164    8652 network_create.go:277] output of [docker network inspect calico-20220512010244-7184]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220512010244-7184
	
	** /stderr **
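When the templated network inspect fails, network_create.go re-runs a plain "docker network inspect" purely to capture stdout/stderr for the log, as seen above. A small sketch of that debugging step (docker CLI assumed on PATH; output markers copied from the log format):

// network_debug.go - sketch of the gather-additional-logs step.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func debugNetwork(name string) {
	cmd := exec.Command("docker", "network", "inspect", name)
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	err := cmd.Run()
	fmt.Printf("error: %v\n-- stdout --\n%s\n** stderr **\n%s\n",
		err, stdout.String(), stderr.String())
}

func main() {
	debugNetwork("calico-20220512010244-7184")
}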
	W0512 01:28:20.666183    8652 delete.go:139] delete failed (probably ok) <nil>
	I0512 01:28:20.666183    8652 fix.go:115] Sleeping 1 second for extra luck!
	I0512 01:28:21.679683    8652 start.go:131] createHost starting for "" (driver="docker")
	I0512 01:28:21.926278    8652 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0512 01:28:21.926912    8652 start.go:165] libmachine.API.Create for "calico-20220512010244-7184" (driver="docker")
	I0512 01:28:21.926957    8652 client.go:168] LocalClient.Create starting
	I0512 01:28:21.927630    8652 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0512 01:28:21.927811    8652 main.go:134] libmachine: Decoding PEM data...
	I0512 01:28:21.927811    8652 main.go:134] libmachine: Parsing certificate...
	I0512 01:28:21.927811    8652 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0512 01:28:21.927811    8652 main.go:134] libmachine: Decoding PEM data...
	I0512 01:28:21.927811    8652 main.go:134] libmachine: Parsing certificate...
	I0512 01:28:21.938636    8652 cli_runner.go:164] Run: docker network inspect calico-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0512 01:28:23.089005    8652 cli_runner.go:211] docker network inspect calico-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 01:28:23.089005    8652 cli_runner.go:217] Completed: docker network inspect calico-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1503096s)
	I0512 01:28:23.096991    8652 network_create.go:272] running [docker network inspect calico-20220512010244-7184] to gather additional debugging logs...
	I0512 01:28:23.096991    8652 cli_runner.go:164] Run: docker network inspect calico-20220512010244-7184
	W0512 01:28:24.226288    8652 cli_runner.go:211] docker network inspect calico-20220512010244-7184 returned with exit code 1
	I0512 01:28:24.226288    8652 cli_runner.go:217] Completed: docker network inspect calico-20220512010244-7184: (1.1292396s)
	I0512 01:28:24.226288    8652 network_create.go:275] error running [docker network inspect calico-20220512010244-7184]: docker network inspect calico-20220512010244-7184: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220512010244-7184
	I0512 01:28:24.226288    8652 network_create.go:277] output of [docker network inspect calico-20220512010244-7184]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220512010244-7184
	
	** /stderr **
	I0512 01:28:24.234258    8652 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 01:28:25.347498    8652 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1131834s)
	I0512 01:28:25.363500    8652 network.go:284] reusing subnet 192.168.49.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e698] amended:false}} dirty:map[] misses:0}
	I0512 01:28:25.363500    8652 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:28:25.363500    8652 network_create.go:115] attempt to create docker network calico-20220512010244-7184 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0512 01:28:25.371501    8652 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220512010244-7184
	W0512 01:28:26.480990    8652 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220512010244-7184 returned with exit code 1
	I0512 01:28:26.480990    8652 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220512010244-7184: (1.109433s)
	W0512 01:28:26.480990    8652 network_create.go:107] failed to create docker network calico-20220512010244-7184 192.168.49.0/24, will retry: subnet is taken
	I0512 01:28:26.497990    8652 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e698] amended:false}} dirty:map[] misses:0}
	I0512 01:28:26.497990    8652 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:28:26.515995    8652 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e698] amended:true}} dirty:map[192.168.49.0:0xc00014e698 192.168.58.0:0xc00051ad18] misses:0}
	I0512 01:28:26.515995    8652 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:28:26.515995    8652 network_create.go:115] attempt to create docker network calico-20220512010244-7184 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0512 01:28:26.522985    8652 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220512010244-7184
	W0512 01:28:27.576680    8652 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220512010244-7184 returned with exit code 1
	I0512 01:28:27.576680    8652 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220512010244-7184: (1.0536413s)
	W0512 01:28:27.576680    8652 network_create.go:107] failed to create docker network calico-20220512010244-7184 192.168.58.0/24, will retry: subnet is taken
	I0512 01:28:27.595620    8652 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e698] amended:true}} dirty:map[192.168.49.0:0xc00014e698 192.168.58.0:0xc00051ad18] misses:1}
	I0512 01:28:27.595620    8652 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:28:27.613617    8652 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc00014e698] amended:true}} dirty:map[192.168.49.0:0xc00014e698 192.168.58.0:0xc00051ad18 192.168.67.0:0xc000530368] misses:1}
	I0512 01:28:27.613617    8652 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:28:27.613617    8652 network_create.go:115] attempt to create docker network calico-20220512010244-7184 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0512 01:28:27.620611    8652 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220512010244-7184
	I0512 01:28:29.944904    8652 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220512010244-7184: (2.3241748s)
	I0512 01:28:29.944904    8652 network_create.go:99] docker network calico-20220512010244-7184 192.168.67.0/24 created
	I0512 01:28:29.944904    8652 kic.go:106] calculated static IP "192.168.67.2" for the "calico-20220512010244-7184" container
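The three create attempts above show the subnet picker walking 192.168.49.0/24 -> 58 -> 67 until "docker network create" stops failing with "subnet is taken"; the node's static IP is then the first client address after the .1 gateway. A sketch of that retry loop (the +9 third-octet stepping is inferred from this log, not taken from minikube's source; flags copied verbatim from the commands above):

// subnet_retry.go - sketch of retrying network creation across subnets.
package main

import (
	"fmt"
	"os/exec"
)

func createNetwork(name string) (string, error) {
	for octet := 49; octet <= 255; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true", name).Run()
		if err == nil {
			return subnet, nil // gateway .1 implies a static node IP of .2
		}
		fmt.Printf("failed to create docker network %s %s, will retry: subnet is taken\n",
			name, subnet)
	}
	return "", fmt.Errorf("no free private subnet found")
}

func main() {
	if subnet, err := createNetwork("calico-20220512010244-7184"); err == nil {
		fmt.Println("created network on", subnet)
	}
}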
	I0512 01:28:29.964964    8652 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0512 01:28:31.054344    8652 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0892358s)
	I0512 01:28:31.063265    8652 cli_runner.go:164] Run: docker volume create calico-20220512010244-7184 --label name.minikube.sigs.k8s.io=calico-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true
	I0512 01:28:32.101828    8652 cli_runner.go:217] Completed: docker volume create calico-20220512010244-7184 --label name.minikube.sigs.k8s.io=calico-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true: (1.0385101s)
	I0512 01:28:32.101828    8652 oci.go:103] Successfully created a docker volume calico-20220512010244-7184
	I0512 01:28:32.109292    8652 cli_runner.go:164] Run: docker run --rm --name calico-20220512010244-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220512010244-7184 --entrypoint /usr/bin/test -v calico-20220512010244-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib
	I0512 01:28:41.085633    8652 cli_runner.go:217] Completed: docker run --rm --name calico-20220512010244-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220512010244-7184 --entrypoint /usr/bin/test -v calico-20220512010244-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib: (8.9751987s)
	I0512 01:28:41.085633    8652 oci.go:107] Successfully prepared a docker volume calico-20220512010244-7184
	I0512 01:28:41.085633    8652 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:28:41.085633    8652 kic.go:179] Starting extracting preloaded images to volume ...
	I0512 01:28:41.097427    8652 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220512010244-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir
	I0512 01:29:08.382969    8652 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220512010244-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir: (27.2840274s)
	I0512 01:29:08.383040    8652 kic.go:188] duration metric: took 27.295985 seconds to extract preloaded images to volume
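The volume preparation above has two halves: create a labelled named volume, then extract the lz4-compressed preload tarball into it with a throwaway tar container built from the kicbase image. A condensed sketch of both steps (image digest pin dropped for brevity; names, paths, and labels taken from the log):

// preload_sketch.go - sketch of volume create + preload extraction.
package main

import (
	"fmt"
	"os/exec"
)

const (
	name    = "calico-20220512010244-7184"
	kicbase = "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138"
	tarball = `C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4`
)

func main() {
	// 1. named volume that will back the node's /var
	if err := exec.Command("docker", "volume", "create", name,
		"--label", "name.minikube.sigs.k8s.io="+name,
		"--label", "created_by.minikube.sigs.k8s.io=true").Run(); err != nil {
		fmt.Println("volume create:", err)
		return
	}
	// 2. throwaway tar container extracts the lz4 preload into the volume
	if err := exec.Command("docker", "run", "--rm", "--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", name+":/extractDir", kicbase,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").Run(); err != nil {
		fmt.Println("preload extract:", err)
	}
}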
	I0512 01:29:08.392253    8652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:29:10.596494    8652 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2041288s)
	I0512 01:29:10.597062    8652 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:77 OomKillDisable:true NGoroutines:58 SystemTime:2022-05-12 01:29:09.4768859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:29:10.606552    8652 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0512 01:29:12.909155    8652 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.3024857s)
	I0512 01:29:12.922282    8652 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220512010244-7184 --name calico-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220512010244-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220512010244-7184 --network calico-20220512010244-7184 --ip 192.168.67.2 --volume calico-20220512010244-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
	I0512 01:29:15.192797    8652 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220512010244-7184 --name calico-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220512010244-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220512010244-7184 --network calico-20220512010244-7184 --ip 192.168.67.2 --volume calico-20220512010244-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: (2.270399s)
	I0512 01:29:15.205802    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Running}}
	I0512 01:29:16.401745    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Running}}: (1.1958828s)
	I0512 01:29:16.412365    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:29:17.557845    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1454214s)
	I0512 01:29:17.564850    8652 cli_runner.go:164] Run: docker exec calico-20220512010244-7184 stat /var/lib/dpkg/alternatives/iptables
	I0512 01:29:18.951812    8652 cli_runner.go:217] Completed: docker exec calico-20220512010244-7184 stat /var/lib/dpkg/alternatives/iptables: (1.3868915s)
	I0512 01:29:18.951812    8652 oci.go:247] the created container "calico-20220512010244-7184" has a running status.
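The "docker run" above boots the kicbase node with the static IP on the new network and publishes the SSH and API-server ports on loopback; minikube then inspects .State.Running and stats a file inside to confirm the container is usable. An abbreviated run-then-verify sketch (several flags and port publishes from the full command are omitted, and the digest pin is dropped):

// kic_run_sketch.go - sketch of starting and verifying the node container.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "calico-20220512010244-7184"
	args := []string{"run", "-d", "-t", "--privileged",
		"--security-opt", "seccomp=unconfined",
		"--security-opt", "apparmor=unconfined",
		"--tmpfs", "/tmp", "--tmpfs", "/run",
		"-v", "/lib/modules:/lib/modules:ro",
		"--hostname", name, "--name", name,
		"--network", name, "--ip", "192.168.67.2",
		"--volume", name + ":/var",
		"--memory=2048mb", "--memory-swap=2048mb", "--cpus=2",
		"--publish=127.0.0.1::8443", "--publish=127.0.0.1::22",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138",
	}
	if err := exec.Command("docker", args...).Run(); err != nil {
		fmt.Println("docker run failed:", err)
		return
	}
	out, _ := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Running}}").Output()
	fmt.Println("running:", strings.TrimSpace(string(out)) == "true")
}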
	I0512 01:29:18.951812    8652 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-20220512010244-7184\id_rsa...
	I0512 01:29:19.188127    8652 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-20220512010244-7184\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0512 01:29:20.530163    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:29:21.742486    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.2122617s)
	I0512 01:29:21.759492    8652 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0512 01:29:21.759492    8652 kic_runner.go:114] Args: [docker exec --privileged calico-20220512010244-7184 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0512 01:29:23.151639    8652 kic_runner.go:123] Done: [docker exec --privileged calico-20220512010244-7184 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.3919766s)
	I0512 01:29:23.159881    8652 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-20220512010244-7184\id_rsa...
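Key provisioning above is: generate an RSA keypair on the host, land the public half as /home/docker/.ssh/authorized_keys in the node, then fix ownership via a privileged exec. A sketch under those assumptions ("docker cp" and "ssh-keygen" stand in for kic_runner's in-process key generation and byte copy; the real key lives under .minikube\machines\<profile>\id_rsa):

// sshkey_sketch.go - sketch of installing an SSH key into the node.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
		fmt.Println(args[0], "failed:", err)
	}
}

func main() {
	name := "calico-20220512010244-7184"
	run("ssh-keygen", "-t", "rsa", "-N", "", "-f", "id_rsa")
	run("docker", "cp", "id_rsa.pub",
		name+":/home/docker/.ssh/authorized_keys")
	// matches the privileged chown recorded at kic_runner.go:114 above
	run("docker", "exec", "--privileged", name,
		"chown", "docker:docker", "/home/docker/.ssh/authorized_keys")
}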
	I0512 01:29:23.745730    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:29:24.938271    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.1919511s)
	I0512 01:29:24.938271    8652 machine.go:88] provisioning docker machine ...
	I0512 01:29:24.938271    8652 ubuntu.go:169] provisioning hostname "calico-20220512010244-7184"
	I0512 01:29:24.947275    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184
	I0512 01:29:26.103703    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.1562463s)
	I0512 01:29:26.110035    8652 main.go:134] libmachine: Using SSH client type: native
	I0512 01:29:26.118844    8652 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51249 <nil> <nil>}
	I0512 01:29:26.118844    8652 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-20220512010244-7184 && echo "calico-20220512010244-7184" | sudo tee /etc/hostname
	I0512 01:29:26.349622    8652 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-20220512010244-7184
	
	I0512 01:29:26.357687    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184
	I0512 01:29:27.510900    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.1530813s)
	I0512 01:29:27.517094    8652 main.go:134] libmachine: Using SSH client type: native
	I0512 01:29:27.517641    8652 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51249 <nil> <nil>}
	I0512 01:29:27.517730    8652 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220512010244-7184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220512010244-7184/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220512010244-7184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0512 01:29:27.711978    8652 main.go:134] libmachine: SSH cmd err, output: <nil>: 
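Every "About to run SSH command" step above goes through a native SSH client dialed at the loopback port Docker published for 22/tcp (51249 in this run). A sketch of one such round trip, assuming golang.org/x/crypto/ssh and the id_rsa generated earlier (libmachine's actual client setup differs in detail):

// ssh_exec_sketch.go - sketch of running a provisioning command over SSH.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile(`C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-20220512010244-7184\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:51249", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput(
		`sudo hostname calico-20220512010244-7184 && echo "calico-20220512010244-7184" | sudo tee /etc/hostname`)
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}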
	I0512 01:29:27.711978    8652 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0512 01:29:27.711978    8652 ubuntu.go:177] setting up certificates
	I0512 01:29:27.711978    8652 provision.go:83] configureAuth start
	I0512 01:29:27.722721    8652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220512010244-7184
	I0512 01:29:28.856235    8652 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220512010244-7184: (1.1334561s)
	I0512 01:29:28.856235    8652 provision.go:138] copyHostCerts
	I0512 01:29:28.856235    8652 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0512 01:29:28.856235    8652 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0512 01:29:28.856235    8652 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0512 01:29:28.857245    8652 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0512 01:29:28.857245    8652 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0512 01:29:28.857245    8652 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0512 01:29:28.858244    8652 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0512 01:29:28.858244    8652 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0512 01:29:28.859235    8652 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0512 01:29:28.859235    8652 provision.go:112] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.calico-20220512010244-7184 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220512010244-7184]
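provision.go:112 issues a TLS server certificate for the Docker daemon with the org and SAN list shown above. A standard-library sketch of that certificate's shape (the real flow signs with the minikube CA from ca.pem/ca-key.pem; this sketch self-signs to stay short):

// servercert_sketch.go - sketch of generating the server cert with SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.calico-20220512010244-7184"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// san=[192.168.67.2 127.0.0.1 localhost minikube calico-20220512010244-7184]
		IPAddresses: []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "calico-20220512010244-7184"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	f, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}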
	I0512 01:29:29.190102    8652 provision.go:172] copyRemoteCerts
	I0512 01:29:29.202774    8652 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0512 01:29:29.215867    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184
	I0512 01:29:30.392456    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.1764733s)
	I0512 01:29:30.392456    8652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51249 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-20220512010244-7184\id_rsa Username:docker}
	I0512 01:29:30.541693    8652 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.3388501s)
	I0512 01:29:30.542085    8652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0512 01:29:30.616604    8652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0512 01:29:30.671552    8652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0512 01:29:30.725112    8652 provision.go:86] duration metric: configureAuth took 3.0129805s
	I0512 01:29:30.725112    8652 ubuntu.go:193] setting minikube options for container-runtime
	I0512 01:29:30.725112    8652 config.go:178] Loaded profile config "calico-20220512010244-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:29:30.734109    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184
	I0512 01:29:31.839464    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.1052994s)
	I0512 01:29:31.844452    8652 main.go:134] libmachine: Using SSH client type: native
	I0512 01:29:31.844452    8652 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51249 <nil> <nil>}
	I0512 01:29:31.844452    8652 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0512 01:29:32.057679    8652 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0512 01:29:32.057679    8652 ubuntu.go:71] root file system type: overlay
	I0512 01:29:32.058939    8652 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0512 01:29:32.072810    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184
	I0512 01:29:33.172347    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.099437s)
	I0512 01:29:33.176340    8652 main.go:134] libmachine: Using SSH client type: native
	I0512 01:29:33.176340    8652 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51249 <nil> <nil>}
	I0512 01:29:33.176340    8652 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0512 01:29:33.395631    8652 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0512 01:29:33.406643    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184
	I0512 01:29:34.518941    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.1122408s)
	I0512 01:29:34.528520    8652 main.go:134] libmachine: Using SSH client type: native
	I0512 01:29:34.529194    8652 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51249 <nil> <nil>}
	I0512 01:29:34.529194    8652 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0512 01:29:35.991688    8652 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-05 13:17:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-12 01:29:33.376869000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0512 01:29:35.991766    8652 machine.go:91] provisioned docker machine in 11.0529329s
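The unit swap just above relies on "diff -u" exiting non-zero when the rendered docker.service.new differs from the installed unit, so the "||" branch installs the new file and restarts Docker only when something changed; the unified diff in the output is simply a byproduct of that check. A sketch of the compare-and-swap, meant to run on the node itself (here wrapped in a local exec rather than SSH):

// unit_swap_sketch.go - sketch of the diff-or-replace idiom above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	script := `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}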
	I0512 01:29:35.991766    8652 client.go:171] LocalClient.Create took 1m14.0609686s
	I0512 01:29:35.991846    8652 start.go:173] duration metric: libmachine.API.Create for "calico-20220512010244-7184" took 1m14.0610812s
	I0512 01:29:35.991846    8652 start.go:306] post-start starting for "calico-20220512010244-7184" (driver="docker")
	I0512 01:29:35.991846    8652 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0512 01:29:36.016387    8652 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0512 01:29:36.027781    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184
	I0512 01:29:37.247559    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.2197159s)
	I0512 01:29:37.247559    8652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51249 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-20220512010244-7184\id_rsa Username:docker}
	I0512 01:29:37.398642    8652 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.3821842s)
	I0512 01:29:37.413283    8652 ssh_runner.go:195] Run: cat /etc/os-release
	I0512 01:29:37.429605    8652 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0512 01:29:37.429605    8652 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0512 01:29:37.429605    8652 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0512 01:29:37.429605    8652 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0512 01:29:37.429605    8652 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0512 01:29:37.430447    8652 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0512 01:29:37.431460    8652 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem -> 71842.pem in /etc/ssl/certs
	I0512 01:29:37.442446    8652 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0512 01:29:37.466025    8652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /etc/ssl/certs/71842.pem (1708 bytes)
	I0512 01:29:37.537988    8652 start.go:309] post-start completed in 1.5460631s
	I0512 01:29:37.554662    8652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220512010244-7184
	I0512 01:29:38.915782    8652 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220512010244-7184: (1.3610499s)
	I0512 01:29:38.915782    8652 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\config.json ...
	I0512 01:29:38.937263    8652 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 01:29:38.948936    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184
	I0512 01:29:40.296187    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.3471828s)
	I0512 01:29:40.296187    8652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51249 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-20220512010244-7184\id_rsa Username:docker}
	I0512 01:29:40.438277    8652 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.5008325s)
	I0512 01:29:40.452820    8652 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0512 01:29:40.463818    8652 start.go:134] duration metric: createHost completed in 1m18.7800491s
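createHost finishes with two disk probes against the node's /var: "df -h" piped through awk for the used percentage ($5) and "df -BG" for the free gigabytes ($4). A sketch of the same probes (run via "docker exec" here instead of SSH, for brevity; errors elided):

// disk_check_sketch.go - sketch of the df probes above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func dfField(name, flags, field string) string {
	script := fmt.Sprintf("df %s /var | awk 'NR==2{print %s}'", flags, field)
	out, _ := exec.Command("docker", "exec", name, "sh", "-c", script).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	name := "calico-20220512010244-7184"
	fmt.Println("used:", dfField(name, "-h", "$5"))
	fmt.Println("free:", dfField(name, "-BG", "$4"))
}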
	I0512 01:29:40.481239    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:29:41.982786    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.5008882s)
	W0512 01:29:41.982786    8652 fix.go:129] unexpected machine state, will restart: <nil>
	I0512 01:29:41.982786    8652 machine.go:88] provisioning docker machine ...
	I0512 01:29:41.982786    8652 ubuntu.go:169] provisioning hostname "calico-20220512010244-7184"
	I0512 01:29:41.991823    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184
	I0512 01:29:43.451688    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.4597917s)
	I0512 01:29:43.454688    8652 main.go:134] libmachine: Using SSH client type: native
	I0512 01:29:43.455688    8652 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51249 <nil> <nil>}
	I0512 01:29:43.455688    8652 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-20220512010244-7184 && echo "calico-20220512010244-7184" | sudo tee /etc/hostname
	I0512 01:29:43.676585    8652 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-20220512010244-7184
	
	I0512 01:29:43.684587    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184
	I0512 01:29:44.956314    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.2715643s)
	I0512 01:29:44.961476    8652 main.go:134] libmachine: Using SSH client type: native
	I0512 01:29:44.962478    8652 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51249 <nil> <nil>}
	I0512 01:29:44.962478    8652 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220512010244-7184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220512010244-7184/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220512010244-7184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0512 01:29:45.106021    8652 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0512 01:29:45.106121    8652 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0512 01:29:45.106121    8652 ubuntu.go:177] setting up certificates
	I0512 01:29:45.106206    8652 provision.go:83] configureAuth start
	I0512 01:29:45.124455    8652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220512010244-7184
	I0512 01:29:46.304603    8652 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220512010244-7184: (1.1800887s)
	I0512 01:29:46.304603    8652 provision.go:138] copyHostCerts
	I0512 01:29:46.305584    8652 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0512 01:29:46.305584    8652 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0512 01:29:46.306601    8652 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0512 01:29:46.307595    8652 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0512 01:29:46.307595    8652 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0512 01:29:46.307595    8652 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0512 01:29:46.308596    8652 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0512 01:29:46.308596    8652 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0512 01:29:46.309715    8652 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0512 01:29:46.310662    8652 provision.go:112] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.calico-20220512010244-7184 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220512010244-7184]
	I0512 01:29:46.597506    8652 provision.go:172] copyRemoteCerts
	I0512 01:29:46.613852    8652 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0512 01:29:46.625129    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184
	I0512 01:29:47.750269    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.1249463s)
	I0512 01:29:47.750269    8652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51249 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-20220512010244-7184\id_rsa Username:docker}
	I0512 01:29:47.896490    8652 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.2825729s)
	I0512 01:29:47.896490    8652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0512 01:29:47.961556    8652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0512 01:29:48.030172    8652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0512 01:29:48.095453    8652 provision.go:86] duration metric: configureAuth took 2.9890952s
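
configureAuth regenerates the Docker server certificate so its SANs cover every address a client may dial: the container IP, loopback, and the machine name (the san=[...] list logged by provision.go:112). A minimal self-signed sketch with Go's crypto/x509 follows; it is an assumption-level illustration, since minikube actually signs with the ca.pem/ca-key.pem pair rather than self-signing:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Self-signed stand-in for the CA-signed server cert minikube generates;
		// the SAN list mirrors the san=[...] values logged above.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.calico-20220512010244-7184"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
			IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "calico-20220512010244-7184"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
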
	I0512 01:29:48.095547    8652 ubuntu.go:193] setting minikube options for container-runtime
	I0512 01:29:48.096160    8652 config.go:178] Loaded profile config "calico-20220512010244-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:29:48.106664    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184
	I0512 01:29:49.195007    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.0882874s)
	I0512 01:29:49.200394    8652 main.go:134] libmachine: Using SSH client type: native
	I0512 01:29:49.200881    8652 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51249 <nil> <nil>}
	I0512 01:29:49.200881    8652 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0512 01:29:49.391950    8652 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0512 01:29:49.680835    8652 ubuntu.go:71] root file system type: overlay
	I0512 01:29:49.681813    8652 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0512 01:29:49.690509    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184
	I0512 01:29:50.808741    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.1178084s)
	I0512 01:29:50.812735    8652 main.go:134] libmachine: Using SSH client type: native
	I0512 01:29:50.812735    8652 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51249 <nil> <nil>}
	I0512 01:29:50.812735    8652 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0512 01:29:51.008855    8652 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0512 01:29:51.021020    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184
	I0512 01:29:52.121098    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.0998292s)
	I0512 01:29:52.124950    8652 main.go:134] libmachine: Using SSH client type: native
	I0512 01:29:52.124950    8652 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51249 <nil> <nil>}
	I0512 01:29:52.124950    8652 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0512 01:29:52.311737    8652 main.go:134] libmachine: SSH cmd err, output: <nil>: 
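
The command that just completed is an update-if-changed swap: the candidate unit is written to docker.service.new first, and only when diff reports a difference is it moved into place and the daemon reloaded, enabled, and restarted, so an unchanged unit never costs a Docker restart. A sketch of building that one-liner for an arbitrary unit (helper name and parameterization are illustrative assumptions):

	package main

	import "fmt"

	// swapIfChanged builds the update-if-changed one-liner logged above:
	// replace the unit and restart the service only when the .new file differs.
	func swapIfChanged(unit, svc string) string {
		return fmt.Sprintf("sudo diff -u %[1]s %[1]s.new || "+
			"{ sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && "+
			"sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }", unit, svc)
	}

	func main() {
		fmt.Println(swapIfChanged("/lib/systemd/system/docker.service", "docker"))
	}
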
	I0512 01:29:52.311737    8652 machine.go:91] provisioned docker machine in 10.3284282s
	I0512 01:29:52.311826    8652 start.go:306] post-start starting for "calico-20220512010244-7184" (driver="docker")
	I0512 01:29:52.311826    8652 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0512 01:29:52.329418    8652 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0512 01:29:52.335449    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184
	I0512 01:29:53.462212    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.1267065s)
	I0512 01:29:53.462212    8652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51249 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-20220512010244-7184\id_rsa Username:docker}
	I0512 01:29:53.583403    8652 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.2539211s)
	I0512 01:29:53.593728    8652 ssh_runner.go:195] Run: cat /etc/os-release
	I0512 01:29:53.609725    8652 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0512 01:29:53.609725    8652 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0512 01:29:53.609725    8652 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0512 01:29:53.609725    8652 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0512 01:29:53.609725    8652 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0512 01:29:53.609725    8652 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0512 01:29:53.610790    8652 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem -> 71842.pem in /etc/ssl/certs
	I0512 01:29:53.620718    8652 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0512 01:29:53.651799    8652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /etc/ssl/certs/71842.pem (1708 bytes)
	I0512 01:29:53.700595    8652 start.go:309] post-start completed in 1.3886988s
	I0512 01:29:53.711883    8652 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 01:29:53.718903    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184
	I0512 01:29:54.821249    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.10229s)
	I0512 01:29:54.821249    8652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51249 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-20220512010244-7184\id_rsa Username:docker}
	I0512 01:29:54.942183    8652 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.2302382s)
	I0512 01:29:54.952225    8652 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
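
The two df probes read the second output row: column 5 (Use%) from `df -h /var` and column 4 (available gigabytes) from `df -BG /var`, exactly what the awk one-liners print. A small Go sketch of the same column extraction (helper name is hypothetical; run locally here, whereas minikube runs it over SSH):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// dfField runs df with the given flags and returns 1-based column `field`
	// of the second output row, the value the awk one-liners select.
	func dfField(args []string, field int) (string, error) {
		out, err := exec.Command("df", args...).Output()
		if err != nil {
			return "", err
		}
		rows := strings.Split(strings.TrimSpace(string(out)), "\n")
		if len(rows) < 2 {
			return "", fmt.Errorf("unexpected df output: %q", out)
		}
		cols := strings.Fields(rows[1])
		if field > len(cols) {
			return "", fmt.Errorf("row has only %d columns", len(cols))
		}
		return cols[field-1], nil
	}

	func main() {
		use, _ := dfField([]string{"-h", "/var"}, 5)    // awk 'NR==2{print $5}'
		avail, _ := dfField([]string{"-BG", "/var"}, 4) // awk 'NR==2{print $4}'
		fmt.Println("use% of /var:", use, "available:", avail)
	}
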
	I0512 01:29:54.969609    8652 fix.go:57] fixHost completed within 6m2.7976846s
	I0512 01:29:54.969860    8652 start.go:81] releasing machines lock for "calico-20220512010244-7184", held for 6m2.7977851s
	I0512 01:29:54.977461    8652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220512010244-7184
	I0512 01:29:56.095732    8652 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220512010244-7184: (1.1180798s)
	I0512 01:29:56.101176    8652 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0512 01:29:56.113173    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184
	I0512 01:29:56.117168    8652 ssh_runner.go:195] Run: sudo service containerd status
	I0512 01:29:56.130160    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184
	I0512 01:29:57.252031    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.138304s)
	I0512 01:29:57.252078    8652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51249 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-20220512010244-7184\id_rsa Username:docker}
	I0512 01:29:57.267100    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.1368818s)
	I0512 01:29:57.267211    8652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51249 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-20220512010244-7184\id_rsa Username:docker}
	I0512 01:29:57.427775    8652 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.3265317s)
	I0512 01:29:57.427775    8652 ssh_runner.go:235] Completed: sudo service containerd status: (1.31054s)
	I0512 01:29:57.437789    8652 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 01:29:57.463431    8652 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0512 01:29:57.472419    8652 ssh_runner.go:195] Run: sudo service crio status
	I0512 01:29:57.523635    8652 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0512 01:29:57.572637    8652 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 01:29:57.617649    8652 ssh_runner.go:195] Run: sudo service docker status
	I0512 01:29:57.674430    8652 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 01:29:57.763448    8652 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 01:29:57.847496    8652 out.go:204] * Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	I0512 01:29:57.856462    8652 cli_runner.go:164] Run: docker exec -t calico-20220512010244-7184 dig +short host.docker.internal
	I0512 01:29:59.197901    8652 cli_runner.go:217] Completed: docker exec -t calico-20220512010244-7184 dig +short host.docker.internal: (1.3412584s)
	I0512 01:29:59.198040    8652 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0512 01:29:59.209653    8652 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0512 01:29:59.224652    8652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0512 01:29:59.267838    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220512010244-7184
	I0512 01:30:00.411532    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.1436367s)
	I0512 01:30:00.412126    8652 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:30:00.425828    8652 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 01:30:00.508150    8652 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 01:30:00.508150    8652 docker.go:541] Images already preloaded, skipping extraction
	I0512 01:30:00.516542    8652 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 01:30:00.583922    8652 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 01:30:00.583922    8652 cache_images.go:84] Images are preloaded, skipping loading
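
Because every image required by v1.23.5 already appears in the `docker images --format {{.Repository}}:{{.Tag}}` output, both the preload-tarball extraction and the per-image load are skipped. A sketch of that set comparison (function name is an assumption):

	package main

	import (
		"fmt"
		"strings"
	)

	// preloaded reports whether every required image already appears in the
	// output of `docker images --format {{.Repository}}:{{.Tag}}`.
	func preloaded(dockerImagesOut string, required []string) bool {
		have := make(map[string]bool)
		for _, line := range strings.Split(strings.TrimSpace(dockerImagesOut), "\n") {
			have[strings.TrimSpace(line)] = true
		}
		for _, img := range required {
			if !have[img] {
				return false
			}
		}
		return true
	}

	func main() {
		out := "k8s.gcr.io/kube-apiserver:v1.23.5\nk8s.gcr.io/pause:3.6"
		fmt.Println(preloaded(out, []string{"k8s.gcr.io/pause:3.6"})) // true
	}
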
	I0512 01:30:00.592921    8652 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0512 01:30:00.779573    8652 cni.go:95] Creating CNI manager for "calico"
	I0512 01:30:00.779573    8652 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0512 01:30:00.779573    8652 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220512010244-7184 NodeName:calico-20220512010244-7184 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0512 01:30:00.779573    8652 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "calico-20220512010244-7184"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
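
The kubeadm.go:162 config above is rendered from the options struct logged at kubeadm.go:158. A toy text/template sketch of that rendering for the InitConfiguration fragment; the opts struct and field names here are hypothetical stand-ins, not minikube's actual types:

	package main

	import (
		"os"
		"text/template"
	)

	// opts is a hypothetical stand-in for the kubeadm options struct; only the
	// fields used by this fragment are included.
	type opts struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
	}

	const frag = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(frag))
		t.Execute(os.Stdout, opts{
			AdvertiseAddress: "192.168.67.2",
			APIServerPort:    8443,
			CRISocket:        "/var/run/dockershim.sock",
			NodeName:         "calico-20220512010244-7184",
		})
	}
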
	
	I0512 01:30:00.780206    8652 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20220512010244-7184 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:calico-20220512010244-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0512 01:30:00.792268    8652 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0512 01:30:00.819093    8652 binaries.go:44] Found k8s binaries, skipping transfer
	I0512 01:30:00.831992    8652 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /var/lib/minikube /etc/init.d
	I0512 01:30:00.865331    8652 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0512 01:30:00.911042    8652 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0512 01:30:00.957037    8652 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I0512 01:30:00.998067    8652 ssh_runner.go:362] scp memory --> /var/lib/minikube/openrc-restart-wrapper.sh (233 bytes)
	I0512 01:30:01.035552    8652 ssh_runner.go:362] scp memory --> /etc/init.d/kubelet (839 bytes)
	I0512 01:30:01.087196    8652 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0512 01:30:01.098501    8652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0512 01:30:01.127428    8652 certs.go:54] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184 for IP: 192.168.67.2
	I0512 01:30:01.128101    8652 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I0512 01:30:01.128618    8652 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I0512 01:30:01.128947    8652 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\client.key
	I0512 01:30:01.128947    8652 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\client.crt with IP's: []
	I0512 01:30:01.263699    8652 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\client.crt ...
	I0512 01:30:01.263699    8652 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\client.crt: {Name:mkaab263a85407c61d507327515b1712a28a44b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:30:01.265303    8652 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\client.key ...
	I0512 01:30:01.265366    8652 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\client.key: {Name:mk5b3a08c8e68f5550ff5a92763b8c15a6dc1cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:30:01.266518    8652 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\apiserver.key.c7fa3a9e
	I0512 01:30:01.266747    8652 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0512 01:30:01.497977    8652 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\apiserver.crt.c7fa3a9e ...
	I0512 01:30:01.497977    8652 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\apiserver.crt.c7fa3a9e: {Name:mk0a22077574d6f567440d2ce418aef9d105c0f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:30:01.499481    8652 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\apiserver.key.c7fa3a9e ...
	I0512 01:30:01.499481    8652 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\apiserver.key.c7fa3a9e: {Name:mkd9174a168ef563765bc2efb962d17953288438 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:30:01.499695    8652 certs.go:320] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\apiserver.crt.c7fa3a9e -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\apiserver.crt
	I0512 01:30:01.506629    8652 certs.go:324] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\apiserver.key.c7fa3a9e -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\apiserver.key
	I0512 01:30:01.510817    8652 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\proxy-client.key
	I0512 01:30:01.511811    8652 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\proxy-client.crt with IP's: []
	I0512 01:30:02.066530    8652 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\proxy-client.crt ...
	I0512 01:30:02.066530    8652 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\proxy-client.crt: {Name:mk33e0ca9404055b3936d5d2db130cdf7841ddb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:30:02.067602    8652 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\proxy-client.key ...
	I0512 01:30:02.067602    8652 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\proxy-client.key: {Name:mkb30c0965b489dfeebdedea0d88b70375ef4f02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:30:02.076532    8652 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184.pem (1338 bytes)
	W0512 01:30:02.076532    8652 certs.go:384] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184_empty.pem, impossibly tiny 0 bytes
	I0512 01:30:02.076532    8652 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0512 01:30:02.077531    8652 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0512 01:30:02.077531    8652 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0512 01:30:02.077531    8652 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0512 01:30:02.077531    8652 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem (1708 bytes)
	I0512 01:30:02.079537    8652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0512 01:30:02.148304    8652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0512 01:30:02.194892    8652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0512 01:30:02.257158    8652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\calico-20220512010244-7184\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0512 01:30:02.320471    8652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0512 01:30:02.369991    8652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0512 01:30:02.429336    8652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0512 01:30:02.478702    8652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0512 01:30:02.535521    8652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184.pem --> /usr/share/ca-certificates/7184.pem (1338 bytes)
	I0512 01:30:02.582155    8652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /usr/share/ca-certificates/71842.pem (1708 bytes)
	I0512 01:30:02.635046    8652 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0512 01:30:02.679600    8652 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0512 01:30:02.729602    8652 ssh_runner.go:195] Run: openssl version
	I0512 01:30:02.754610    8652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7184.pem && ln -fs /usr/share/ca-certificates/7184.pem /etc/ssl/certs/7184.pem"
	I0512 01:30:02.787106    8652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7184.pem
	I0512 01:30:02.797141    8652 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 11 23:10 /usr/share/ca-certificates/7184.pem
	I0512 01:30:02.818221    8652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7184.pem
	I0512 01:30:02.855685    8652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7184.pem /etc/ssl/certs/51391683.0"
	I0512 01:30:02.893699    8652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71842.pem && ln -fs /usr/share/ca-certificates/71842.pem /etc/ssl/certs/71842.pem"
	I0512 01:30:02.946041    8652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71842.pem
	I0512 01:30:02.956663    8652 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 11 23:10 /usr/share/ca-certificates/71842.pem
	I0512 01:30:02.965656    8652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71842.pem
	I0512 01:30:02.987662    8652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71842.pem /etc/ssl/certs/3ec20f2e.0"
	I0512 01:30:03.032020    8652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0512 01:30:03.080014    8652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:30:03.089593    8652 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 11 23:00 /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:30:03.102601    8652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:30:03.132595    8652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0512 01:30:03.162637    8652 kubeadm.go:391] StartCluster: {Name:calico-20220512010244-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:calico-20220512010244-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 01:30:03.169628    8652 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0512 01:30:03.260456    8652 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0512 01:30:03.310860    8652 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0512 01:30:03.333238    8652 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0512 01:30:03.343244    8652 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0512 01:30:03.364253    8652 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
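
The probe above deliberately tolerates failure: exit status 2 from ls means no kubeconfigs from a previous run exist, so stale-config cleanup is skipped and kubeadm init starts from scratch. A sketch of that exit-status handling with os/exec (run locally here; minikube executes it over SSH):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Probe for kubeconfigs left behind by a previous cluster; a non-zero
		// exit simply means a fresh init, not a fatal error.
		cmd := exec.Command("ls", "-la",
			"/etc/kubernetes/admin.conf", "/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf", "/etc/kubernetes/scheduler.conf")
		if err := cmd.Run(); err != nil {
			if ee, ok := err.(*exec.ExitError); ok {
				fmt.Printf("config check failed (status %d), skipping stale config cleanup\n", ee.ExitCode())
				return
			}
			panic(err)
		}
		fmt.Println("existing configs found; clean up before kubeadm init")
	}
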
	I0512 01:30:03.364253    8652 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0512 01:30:29.760910    8652 out.go:204]   - Generating certificates and keys ...
	I0512 01:30:29.765974    8652 out.go:204]   - Booting up control plane ...
	I0512 01:30:29.776908    8652 out.go:204]   - Configuring RBAC rules ...
	I0512 01:30:29.779897    8652 cni.go:95] Creating CNI manager for "calico"
	I0512 01:30:29.784906    8652 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0512 01:30:29.786931    8652 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0512 01:30:29.786931    8652 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
	I0512 01:30:30.048666    8652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0512 01:30:36.328386    8652 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (6.2794023s)
	I0512 01:30:36.328498    8652 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0512 01:30:36.349754    8652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:36.351104    8652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=50a7977b568d2ad3e04003527a57f4502d6177a0 minikube.k8s.io/name=calico-20220512010244-7184 minikube.k8s.io/updated_at=2022_05_12T01_30_36_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:36.359669    8652 ops.go:34] apiserver oom_adj: -16
	I0512 01:30:36.746917    8652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:37.541519    8652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:38.042998    8652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:38.529762    8652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:39.035299    8652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:39.543392    8652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:40.039477    8652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:40.546084    8652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:41.048466    8652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:30:42.220422    8652 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.171897s)
	I0512 01:30:42.221434    8652 kubeadm.go:1020] duration metric: took 5.8924907s to wait for elevateKubeSystemPrivileges.
	I0512 01:30:42.221434    8652 kubeadm.go:393] StartCluster complete in 39.0568168s
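
The burst of `kubectl get sa default` runs between 01:30:36 and 01:30:42 is a retry loop: the default ServiceAccount is created asynchronously by the controller manager, and the cluster-admin binding minikube applies is only meaningful once it exists. A sketch of that polling (helper name and the 500ms interval are assumptions; the kubeconfig path is taken from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls `kubectl get sa default` until the controller
	// manager has created the default ServiceAccount.
	func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if exec.Command("kubectl", "get", "sa", "default",
				"--kubeconfig", kubeconfig).Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default serviceaccount not created within %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
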
	I0512 01:30:42.221434    8652 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:30:42.222460    8652 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0512 01:30:42.228438    8652 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:30:43.305473    8652 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220512010244-7184" rescaled to 1
	I0512 01:30:43.305550    8652 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 01:30:43.312798    8652 out.go:177] * Verifying Kubernetes components...
	I0512 01:30:43.305684    8652 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0512 01:30:43.312911    8652 addons.go:65] Setting storage-provisioner=true in profile "calico-20220512010244-7184"
	I0512 01:30:43.320256    8652 addons.go:153] Setting addon storage-provisioner=true in "calico-20220512010244-7184"
	W0512 01:30:43.320256    8652 addons.go:165] addon storage-provisioner should already be in state true
	I0512 01:30:43.306540    8652 config.go:178] Loaded profile config "calico-20220512010244-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:30:43.305655    8652 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0512 01:30:43.312911    8652 addons.go:65] Setting default-storageclass=true in profile "calico-20220512010244-7184"
	I0512 01:30:43.323426    8652 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220512010244-7184"
	I0512 01:30:43.323659    8652 host.go:66] Checking if "calico-20220512010244-7184" exists ...
	I0512 01:30:43.355000    8652 ssh_runner.go:195] Run: sudo service kubelet status
	I0512 01:30:43.360998    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:30:43.360998    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:30:43.833047    8652 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0512 01:30:43.852082    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220512010244-7184
	I0512 01:30:45.025759    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.6646772s)
	I0512 01:30:45.072737    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.7116526s)
	I0512 01:30:45.075734    8652 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0512 01:30:45.077785    8652 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 01:30:45.077785    8652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0512 01:30:45.084730    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184
	I0512 01:30:45.126822    8652 addons.go:153] Setting addon default-storageclass=true in "calico-20220512010244-7184"
	W0512 01:30:45.126822    8652 addons.go:165] addon default-storageclass should already be in state true
	I0512 01:30:45.126822    8652 host.go:66] Checking if "calico-20220512010244-7184" exists ...
	I0512 01:30:45.159796    8652 cli_runner.go:164] Run: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}
	I0512 01:30:45.496369    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.644203s)
	I0512 01:30:45.500379    8652 node_ready.go:35] waiting up to 5m0s for node "calico-20220512010244-7184" to be "Ready" ...
	I0512 01:30:45.514016    8652 node_ready.go:49] node "calico-20220512010244-7184" has status "Ready":"True"
	I0512 01:30:45.514016    8652 node_ready.go:38] duration metric: took 13.6367ms waiting for node "calico-20220512010244-7184" to be "Ready" ...
	I0512 01:30:45.514016    8652 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0512 01:30:45.551010    8652 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace to be "Ready" ...
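
Each pod_ready:102 line below is one iteration of a readiness poll: read the pod's Ready condition and keep waiting while it reports False. A kubectl-based sketch of the check, standing in for the in-process API calls minikube actually makes:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podReady reads the pod's Ready condition, the field behind each
	// pod_ready:102 log line below.
	func podReady(name, ns string) bool {
		out, err := exec.Command("kubectl", "get", "pod", name, "-n", ns, "-o",
			`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		return err == nil && strings.TrimSpace(string(out)) == "True"
	}

	func main() {
		name := "calico-kube-controllers-8594699699-dp75t"
		for deadline := time.Now().Add(5 * time.Minute); time.Now().Before(deadline); {
			if podReady(name, "kube-system") {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for", name)
	}
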
	I0512 01:30:46.789297    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.7044814s)
	I0512 01:30:46.789297    8652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51249 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-20220512010244-7184\id_rsa Username:docker}
	I0512 01:30:46.866377    8652 cli_runner.go:217] Completed: docker container inspect calico-20220512010244-7184 --format={{.State.Status}}: (1.7064947s)
	I0512 01:30:46.866377    8652 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0512 01:30:46.866377    8652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0512 01:30:46.879273    8652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184
	I0512 01:30:47.660864    8652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 01:30:47.815030    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:30:48.553667    8652 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220512010244-7184: (1.6743092s)
	I0512 01:30:48.553667    8652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51249 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\calico-20220512010244-7184\id_rsa Username:docker}
	I0512 01:30:49.154246    8652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0512 01:30:49.821829    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:30:52.493957    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:30:53.013437    8652 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (9.1799246s)
	I0512 01:30:53.013437    8652 start.go:815] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
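
The nine-second command that just completed rewrites the live coredns ConfigMap: a hosts{} stanza is inserted ahead of the `forward . /etc/resolv.conf` line so host.minikube.internal resolves to the host gateway (192.168.65.2) inside the cluster. A pure-string sketch of that insertion; minikube does it with the sed pipeline logged above rather than in Go:

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord inserts a hosts{} stanza ahead of the forward plugin so
	// host.minikube.internal resolves in-cluster. Indentation matches the
	// default Corefile, as the sed pattern above assumes.
	func injectHostRecord(corefile, ip string) string {
		marker := "        forward . /etc/resolv.conf"
		stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", ip)
		return strings.Replace(corefile, marker, stanza+marker, 1)
	}

	func main() {
		corefile := ".:53 {\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
		fmt.Println(injectHostRecord(corefile, "192.168.65.2"))
	}
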
	I0512 01:30:54.109853    8652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.4486624s)
	I0512 01:30:54.109853    8652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.955356s)
	I0512 01:30:54.115864    8652 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0512 01:30:54.119883    8652 addons.go:417] enableAddons completed in 10.8136513s
	I0512 01:30:54.909784    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:30:57.308083    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:30:59.319567    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:01.814768    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:04.322341    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:11.092893    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:13.334950    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:16.091382    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:18.240221    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:20.245036    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:22.309867    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:24.805401    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:26.826644    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:29.237977    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:31.254134    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:33.808245    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:36.308355    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:38.323700    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:40.407171    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:42.816967    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:45.246114    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:47.804156    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:49.824870    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:52.236755    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:54.249702    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:56.307022    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:58.741667    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:00.746153    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:02.894567    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:05.236033    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:07.250871    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:09.746448    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:11.810683    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:14.256872    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:16.310442    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:18.738332    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:20.741018    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:22.810342    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:25.307619    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:27.742039    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:29.744504    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:31.756707    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:33.807555    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:36.241189    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:38.310702    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:40.810058    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:42.830045    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:45.314127    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:47.811645    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:49.813073    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:52.320626    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:54.322980    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:56.811545    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:59.241856    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:01.242829    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:03.249507    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:05.813243    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:08.317040    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:10.322449    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:12.744407    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:14.823839    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:17.323720    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:19.752264    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:21.813100    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:24.247220    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:26.748404    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:29.254711    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:31.314730    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:33.753901    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:35.758435    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:37.829076    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:40.330839    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:42.411128    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:44.836145    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:47.313312    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:49.742047    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:51.742169    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:53.753333    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:55.814001    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:58.326655    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:00.414894    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:02.827277    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:05.248403    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:07.250688    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:09.261004    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:11.758543    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:13.824375    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:16.314793    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:18.914383    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:21.241418    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:23.327816    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:25.744440    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:27.815517    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:30.414402    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:32.824629    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:35.251677    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:37.750522    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:39.754113    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:41.756924    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:44.329289    8652 pod_ready.go:102] pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:45.828283    8652 pod_ready.go:81] duration metric: took 4m0.2650995s waiting for pod "calico-kube-controllers-8594699699-dp75t" in "kube-system" namespace to be "Ready" ...
	E0512 01:34:45.828283    8652 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0512 01:34:45.828283    8652 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-mgrcc" in "kube-system" namespace to be "Ready" ...
	I0512 01:34:47.887596    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:49.915734    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:52.430534    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:54.931208    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:57.430161    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:59.458846    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:01.920213    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:04.515738    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:06.920340    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:08.932733    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:11.435946    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:13.518534    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:15.931526    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:18.434192    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:20.516754    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:22.934656    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:25.019206    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:27.447581    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:29.937316    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:31.943501    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:34.416634    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:36.433336    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:38.441614    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:40.935498    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:42.937711    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:45.433399    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:47.436516    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:49.873301    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:51.935922    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:53.937926    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:56.520033    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:58.935111    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:01.375308    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:03.426311    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:05.869977    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:07.932663    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:10.384219    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:12.435190    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:14.435870    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:16.437872    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:18.883061    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:20.940588    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:22.950313    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:25.377184    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:27.440899    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:29.442593    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:31.520948    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:33.953350    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:36.437934    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:38.525893    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:40.939965    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:43.424316    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:45.921577    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:48.389178    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:50.437640    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:52.438611    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:54.937264    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:56.946001    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:59.437960    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:01.439101    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:03.894544    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:05.938894    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:08.440425    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:10.446227    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:13.023233    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:15.380090    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:17.434861    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:19.888066    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:21.927238    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:23.940199    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:26.376390    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:28.441201    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:30.947276    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:33.437823    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:35.458587    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:37.927044    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:39.936390    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:41.941970    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:44.428081    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:46.438840    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:48.925396    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:50.939967    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:53.438198    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:55.444005    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:57.926650    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:00.457452    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:02.886408    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:04.943418    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:07.379728    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:09.388738    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:11.943533    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:14.380947    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:16.443318    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:18.952276    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:21.426192    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:23.447536    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:25.942681    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:28.441596    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:30.940728    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:33.037661    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:35.387269    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:37.387391    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:39.447482    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:41.529346    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:43.887188    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:45.949288    8652 pod_ready.go:102] pod "calico-node-mgrcc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:46.029708    8652 pod_ready.go:81] duration metric: took 4m0.1892335s waiting for pod "calico-node-mgrcc" in "kube-system" namespace to be "Ready" ...
	E0512 01:38:46.029708    8652 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0512 01:38:46.029708    8652 pod_ready.go:38] duration metric: took 8m0.4903164s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0512 01:38:46.032709    8652 out.go:177] 
	W0512 01:38:46.035705    8652 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0512 01:38:46.035705    8652 out.go:239] * 
	W0512 01:38:46.037707    8652 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0512 01:38:46.046708    8652 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (957.61s)
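
A note on the failure mode above: the repeated pod_ready.go:102 lines come from a poll that re-reads the pod and checks its Ready condition every ~2s until a per-pod budget expires. The Go sketch below is a minimal, hypothetical reconstruction of such a poll using client-go; it is not minikube's actual pod_ready implementation. The namespace, pod name, ~2s interval, and 4-minute budget are taken from the log above; the kubeconfig path and the helper name waitPodReady are assumptions for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod's Ready condition until it is True or the
// timeout expires, logging each observation like the lines above.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient API errors as "not ready yet" and keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("pod %q in %q namespace has status \"Ready\":%q\n", name, ns, string(c.Status))
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil // Ready condition not reported yet
	})
}

func main() {
	// Assumed kubeconfig location; the test harness wires up its own config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name and 4m budget match the failing wait in the log above.
	err = waitPodReady(cs, "kube-system", "calico-kube-controllers-8594699699-dp75t", 4*time.Minute)
	if err != nil {
		fmt.Println("WaitExtra: waitPodCondition:", err)
	}
}

With this shape, wait.PollImmediate returns wait.ErrWaitTimeout when the budget runs out, whose message is the "timed out waiting for the condition" text seen in the WaitExtra error above.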

                                                
                                    
TestNetworkPlugins/group/custom-weave/Start (984.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-weave-20220512010244-7184 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata\weavenet.yaml --driver=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p custom-weave-20220512010244-7184 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata\weavenet.yaml --driver=docker: exit status 105 (16m24.2546281s)

                                                
                                                
-- stdout --
	* [custom-weave-20220512010244-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13639
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node custom-weave-20220512010244-7184 in cluster custom-weave-20220512010244-7184
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "custom-weave-20220512010244-7184" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring testdata\weavenet.yaml (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0512 01:23:38.167939    7708 out.go:296] Setting OutFile to fd 1520 ...
	I0512 01:23:38.224799    7708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:23:38.224799    7708 out.go:309] Setting ErrFile to fd 1680...
	I0512 01:23:38.224799    7708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:23:38.237579    7708 out.go:303] Setting JSON to false
	I0512 01:23:38.240615    7708 start.go:115] hostinfo: {"hostname":"minikube4","uptime":17071,"bootTime":1652301547,"procs":169,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0512 01:23:38.240615    7708 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0512 01:23:38.250217    7708 out.go:177] * [custom-weave-20220512010244-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0512 01:23:38.253742    7708 notify.go:193] Checking for updates...
	I0512 01:23:38.255789    7708 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0512 01:23:38.258370    7708 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0512 01:23:38.261134    7708 out.go:177]   - MINIKUBE_LOCATION=13639
	I0512 01:23:38.263568    7708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0512 01:23:38.266726    7708 config.go:178] Loaded profile config "calico-20220512010244-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:23:38.266726    7708 config.go:178] Loaded profile config "cilium-20220512010244-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:23:38.267412    7708 config.go:178] Loaded profile config "default-k8s-different-port-20220512011148-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:23:38.267412    7708 driver.go:358] Setting default libvirt URI to qemu:///system
	I0512 01:23:40.909201    7708 docker.go:137] docker version: linux-20.10.14
	I0512 01:23:40.918377    7708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:23:42.984491    7708 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0660077s)
	I0512 01:23:42.985160    7708 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:59 OomKillDisable:true NGoroutines:53 SystemTime:2022-05-12 01:23:41.9509482 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:23:42.987291    7708 out.go:177] * Using the docker driver based on user configuration
	I0512 01:23:42.991281    7708 start.go:284] selected driver: docker
	I0512 01:23:42.991363    7708 start.go:801] validating driver "docker" against <nil>
	I0512 01:23:42.991392    7708 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0512 01:23:43.146475    7708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:23:45.400908    7708 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.2543167s)
	I0512 01:23:45.400908    7708 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:60 OomKillDisable:true NGoroutines:53 SystemTime:2022-05-12 01:23:44.3038962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:23:45.400908    7708 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0512 01:23:45.401887    7708 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0512 01:23:45.404560    7708 out.go:177] * Using Docker Desktop driver with the root privilege
	I0512 01:23:45.406596    7708 cni.go:95] Creating CNI manager for "testdata\\weavenet.yaml"
	I0512 01:23:45.406824    7708 start_flags.go:301] Found "testdata\\weavenet.yaml" CNI - setting NetworkPlugin=cni
	I0512 01:23:45.406890    7708 start_flags.go:306] config:
	{Name:custom-weave-20220512010244-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220512010244-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 01:23:45.411521    7708 out.go:177] * Starting control plane node custom-weave-20220512010244-7184 in cluster custom-weave-20220512010244-7184
	I0512 01:23:45.414493    7708 cache.go:120] Beginning downloading kic base image for docker with docker
	I0512 01:23:45.419777    7708 out.go:177] * Pulling base image ...
	I0512 01:23:45.423900    7708 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0512 01:23:45.424559    7708 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:23:45.424559    7708 preload.go:148] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0512 01:23:45.424559    7708 cache.go:57] Caching tarball of preloaded images
	I0512 01:23:45.425091    7708 preload.go:174] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0512 01:23:45.425176    7708 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0512 01:23:45.425176    7708 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\config.json ...
	I0512 01:23:45.425771    7708 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\config.json: {Name:mke4e3b55794ccef3659e4cea482cf72d7fe1d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:23:46.531179    7708 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
	I0512 01:23:46.531179    7708 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in daemon, skipping load
	I0512 01:23:46.531179    7708 cache.go:206] Successfully downloaded all kic artifacts
	I0512 01:23:46.531179    7708 start.go:352] acquiring machines lock for custom-weave-20220512010244-7184: {Name:mk081df4f0c32f192dd90e39941efdba849aa624 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 01:23:46.531179    7708 start.go:356] acquired machines lock for "custom-weave-20220512010244-7184" in 0s
	I0512 01:23:46.531179    7708 start.go:91] Provisioning new machine with config: &{Name:custom-weave-20220512010244-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220512010244-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 01:23:46.531179    7708 start.go:131] createHost starting for "" (driver="docker")
	I0512 01:23:46.535171    7708 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0512 01:23:46.536179    7708 start.go:165] libmachine.API.Create for "custom-weave-20220512010244-7184" (driver="docker")
	I0512 01:23:46.536179    7708 client.go:168] LocalClient.Create starting
	I0512 01:23:46.536179    7708 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0512 01:23:46.536179    7708 main.go:134] libmachine: Decoding PEM data...
	I0512 01:23:46.536179    7708 main.go:134] libmachine: Parsing certificate...
	I0512 01:23:46.537179    7708 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0512 01:23:46.537179    7708 main.go:134] libmachine: Decoding PEM data...
	I0512 01:23:46.537179    7708 main.go:134] libmachine: Parsing certificate...
	I0512 01:23:46.545210    7708 cli_runner.go:164] Run: docker network inspect custom-weave-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0512 01:23:47.612905    7708 cli_runner.go:211] docker network inspect custom-weave-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 01:23:47.612905    7708 cli_runner.go:217] Completed: docker network inspect custom-weave-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0676402s)
	I0512 01:23:47.621894    7708 network_create.go:272] running [docker network inspect custom-weave-20220512010244-7184] to gather additional debugging logs...
	I0512 01:23:47.621894    7708 cli_runner.go:164] Run: docker network inspect custom-weave-20220512010244-7184
	W0512 01:23:48.672253    7708 cli_runner.go:211] docker network inspect custom-weave-20220512010244-7184 returned with exit code 1
	I0512 01:23:48.672322    7708 cli_runner.go:217] Completed: docker network inspect custom-weave-20220512010244-7184: (1.0502523s)
	I0512 01:23:48.672346    7708 network_create.go:275] error running [docker network inspect custom-weave-20220512010244-7184]: docker network inspect custom-weave-20220512010244-7184: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20220512010244-7184
	I0512 01:23:48.672346    7708 network_create.go:277] output of [docker network inspect custom-weave-20220512010244-7184]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20220512010244-7184
	
	** /stderr **
	I0512 01:23:48.677740    7708 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 01:23:49.733568    7708 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0555862s)
	I0512 01:23:49.760436    7708 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000007408] misses:0}
	I0512 01:23:49.760436    7708 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:23:49.761435    7708 network_create.go:115] attempt to create docker network custom-weave-20220512010244-7184 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0512 01:23:49.767436    7708 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220512010244-7184
	I0512 01:23:50.921921    7708 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220512010244-7184: (1.1544264s)
	I0512 01:23:50.922865    7708 network_create.go:99] docker network custom-weave-20220512010244-7184 192.168.49.0/24 created
	I0512 01:23:50.922865    7708 kic.go:106] calculated static IP "192.168.49.2" for the "custom-weave-20220512010244-7184" container
	I0512 01:23:50.939599    7708 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0512 01:23:51.979523    7708 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0396819s)
	I0512 01:23:51.988690    7708 cli_runner.go:164] Run: docker volume create custom-weave-20220512010244-7184 --label name.minikube.sigs.k8s.io=custom-weave-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true
	I0512 01:23:53.082164    7708 cli_runner.go:217] Completed: docker volume create custom-weave-20220512010244-7184 --label name.minikube.sigs.k8s.io=custom-weave-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true: (1.0933779s)
	I0512 01:23:53.082164    7708 oci.go:103] Successfully created a docker volume custom-weave-20220512010244-7184
	I0512 01:23:53.090601    7708 cli_runner.go:164] Run: docker run --rm --name custom-weave-20220512010244-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220512010244-7184 --entrypoint /usr/bin/test -v custom-weave-20220512010244-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib
	I0512 01:24:01.661800    7708 cli_runner.go:217] Completed: docker run --rm --name custom-weave-20220512010244-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220512010244-7184 --entrypoint /usr/bin/test -v custom-weave-20220512010244-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib: (8.570757s)
	I0512 01:24:01.661800    7708 oci.go:107] Successfully prepared a docker volume custom-weave-20220512010244-7184
	I0512 01:24:01.661800    7708 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:24:01.661800    7708 kic.go:179] Starting extracting preloaded images to volume ...
	I0512 01:24:01.671802    7708 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220512010244-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir
	I0512 01:24:33.511179    7708 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220512010244-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir: (31.8376384s)
	I0512 01:24:33.511455    7708 kic.go:188] duration metric: took 31.847968 seconds to extract preloaded images to volume
	I0512 01:24:33.522435    7708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:24:35.579550    7708 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0570093s)
	I0512 01:24:35.579550    7708 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:50 OomKillDisable:true NGoroutines:50 SystemTime:2022-05-12 01:24:34.5377066 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
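	The info.go:265 dump above is produced by asking the Docker CLI for its info as JSON and decoding it client-side. A minimal sketch of that decode, assuming only a handful of the JSON keys visible in the log line; the struct here is illustrative, not minikube's:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo picks out a few fields from `docker system info --format '{{json .}}'`;
// unknown keys in the JSON are simply ignored by the decoder.
type dockerInfo struct {
	Containers        int
	ContainersRunning int
	ServerVersion     string
	OperatingSystem   string
	NCPU              int
	MemTotal          int64
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker system info failed:", err)
		return
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("%s on %s: %d CPUs, %d bytes RAM, %d/%d containers running\n",
		info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal,
		info.ContainersRunning, info.Containers)
}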
	I0512 01:24:35.588320    7708 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0512 01:24:37.649567    7708 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.0611419s)
	I0512 01:24:37.656339    7708 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220512010244-7184 --name custom-weave-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220512010244-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220512010244-7184 --network custom-weave-20220512010244-7184 --ip 192.168.49.2 --volume custom-weave-20220512010244-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
	W0512 01:24:39.326797    7708 cli_runner.go:211] docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220512010244-7184 --name custom-weave-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220512010244-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220512010244-7184 --network custom-weave-20220512010244-7184 --ip 192.168.49.2 --volume custom-weave-20220512010244-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a returned with exit code 125
	I0512 01:24:39.326797    7708 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220512010244-7184 --name custom-weave-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220512010244-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220512010244-7184 --network custom-weave-20220512010244-7184 --ip 192.168.49.2 --volume custom-weave-20220512010244-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: (1.6703723s)
	I0512 01:24:39.326797    7708 client.go:171] LocalClient.Create took 52.7879017s
	I0512 01:24:41.349850    7708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 01:24:41.355924    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	W0512 01:24:42.488566    7708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184 returned with exit code 1
	I0512 01:24:42.488566    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.1325839s)
	I0512 01:24:42.488566    7708 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0512 01:24:42.783763    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	W0512 01:24:43.878355    7708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184 returned with exit code 1
	I0512 01:24:43.878577    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.0945358s)
	W0512 01:24:43.878754    7708 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0512 01:24:43.878754    7708 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0512 01:24:43.888722    7708 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0512 01:24:43.905415    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	W0512 01:24:44.955982    7708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184 returned with exit code 1
	I0512 01:24:44.955982    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.0502935s)
	I0512 01:24:44.955982    7708 retry.go:31] will retry after 291.140013ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0512 01:24:45.264637    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	W0512 01:24:46.347296    7708 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184 returned with exit code 1
	I0512 01:24:46.347296    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.0826035s)
	W0512 01:24:46.347296    7708 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0512 01:24:46.347296    7708 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
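	Each failed df attempt above bottoms out in the same lookup: the SSH endpoint is whatever host port Docker mapped to the container's 22/tcp, read back through a Go template passed to docker container inspect. The template below is the one from the log; the retry loop is a simplified stand-in for retry.go, with illustrative delays (the log shows jittered waits around 300ms):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// sshHostPort reads the host port mapped to 22/tcp. The mapping only exists
// while the container runs, so inspect exits non-zero for a stopped or
// missing container, which is exactly the failure logged above.
func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	for attempt := 1; attempt <= 2; attempt++ {
		port, err := sshHostPort("custom-weave-20220512010244-7184")
		if err == nil {
			fmt.Println("ssh host port:", port)
			return
		}
		fmt.Printf("attempt %d: %v\n", attempt, err)
		time.Sleep(300 * time.Millisecond)
	}
}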
	I0512 01:24:46.347296    7708 start.go:134] duration metric: createHost completed in 59.8130393s
	I0512 01:24:46.347296    7708 start.go:81] releasing machines lock for "custom-weave-20220512010244-7184", held for 59.8130393s
	W0512 01:24:46.347296    7708 start.go:608] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220512010244-7184 --name custom-weave-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220512010244-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220512010244-7184 --network custom-weave-20220512010244-7184 --ip 192.168.49.2 --volume custom-weave-20220512010244-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: exit status 125
	stdout:
	7d1c6a00f8e4969acb332305a972b6855a49846a5d43e97c62e78dc742988076
	
	stderr:
	docker: Error response from daemon: network custom-weave-20220512010244-7184 not found.
	I0512 01:24:46.362638    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:47.454482    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0917599s)
	W0512 01:24:47.454482    7708 start.go:613] delete host: Docker machine "custom-weave-20220512010244-7184" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	W0512 01:24:47.454482    7708 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220512010244-7184 --name custom-weave-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220512010244-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220512010244-7184 --network custom-weave-20220512010244-7184 --ip 192.168.49.2 --volume custom-weave-20220512010244-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: exit status 125
	stdout:
	7d1c6a00f8e4969acb332305a972b6855a49846a5d43e97c62e78dc742988076
	
	stderr:
	docker: Error response from daemon: network custom-weave-20220512010244-7184 not found.
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220512010244-7184 --name custom-weave-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220512010244-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220512010244-7184 --network custom-weave-20220512010244-7184 --ip 192.168.49.2 --volume custom-weave-20220512010244-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: exit status 125
	stdout:
	7d1c6a00f8e4969acb332305a972b6855a49846a5d43e97c62e78dc742988076
	
	stderr:
	docker: Error response from daemon: network custom-weave-20220512010244-7184 not found.
	
	I0512 01:24:47.454482    7708 start.go:623] Will try again in 5 seconds ...
	I0512 01:24:52.466181    7708 start.go:352] acquiring machines lock for custom-weave-20220512010244-7184: {Name:mk081df4f0c32f192dd90e39941efdba849aa624 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 01:24:52.466474    7708 start.go:356] acquired machines lock for "custom-weave-20220512010244-7184" in 293.2µs
	I0512 01:24:52.466670    7708 start.go:94] Skipping create...Using existing machine configuration
	I0512 01:24:52.466670    7708 fix.go:55] fixHost starting: 
	I0512 01:24:52.479312    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:53.570004    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0904282s)
	I0512 01:24:53.570004    7708 fix.go:103] recreateIfNeeded on custom-weave-20220512010244-7184: state= err=<nil>
	I0512 01:24:53.570004    7708 fix.go:108] machineExists: false. err=machine does not exist
	I0512 01:24:53.578990    7708 out.go:177] * docker "custom-weave-20220512010244-7184" container is missing, will recreate.
	I0512 01:24:53.581083    7708 delete.go:124] DEMOLISHING custom-weave-20220512010244-7184 ...
	I0512 01:24:53.596632    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:54.694932    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0981282s)
	I0512 01:24:54.694932    7708 stop.go:79] host is in state 
	I0512 01:24:54.695031    7708 main.go:134] libmachine: Stopping "custom-weave-20220512010244-7184"...
	I0512 01:24:54.709809    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:24:55.828923    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.1188836s)
	I0512 01:24:55.852130    7708 kic_runner.go:93] Run: systemctl --version
	I0512 01:24:55.852130    7708 kic_runner.go:114] Args: [docker exec --privileged custom-weave-20220512010244-7184 systemctl --version]
	I0512 01:24:56.965614    7708 kic_runner.go:93] Run: sudo service kubelet stop
	I0512 01:24:56.965614    7708 kic_runner.go:114] Args: [docker exec --privileged custom-weave-20220512010244-7184 sudo service kubelet stop]
	I0512 01:24:57.997475    7708 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 7d1c6a00f8e4969acb332305a972b6855a49846a5d43e97c62e78dc742988076 is not running
	
	** /stderr **
	W0512 01:24:57.997711    7708 kic.go:439] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 7d1c6a00f8e4969acb332305a972b6855a49846a5d43e97c62e78dc742988076 is not running
	I0512 01:24:58.014581    7708 kic_runner.go:93] Run: sudo service kubelet stop
	I0512 01:24:58.014581    7708 kic_runner.go:114] Args: [docker exec --privileged custom-weave-20220512010244-7184 sudo service kubelet stop]
	I0512 01:24:59.098923    7708 openrc.go:165] stop output: 
	** stderr ** 
	Error response from daemon: Container 7d1c6a00f8e4969acb332305a972b6855a49846a5d43e97c62e78dc742988076 is not running
	
	** /stderr **
	W0512 01:24:59.098923    7708 kic.go:441] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 7d1c6a00f8e4969acb332305a972b6855a49846a5d43e97c62e78dc742988076 is not running
	I0512 01:24:59.113058    7708 kic_runner.go:93] Run: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0512 01:24:59.113058    7708 kic_runner.go:114] Args: [docker exec --privileged custom-weave-20220512010244-7184 docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}]
	I0512 01:25:00.230780    7708 kic.go:452] unable list containers : docker: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 7d1c6a00f8e4969acb332305a972b6855a49846a5d43e97c62e78dc742988076 is not running
	I0512 01:25:00.230923    7708 kic.go:462] successfully stopped kubernetes!
	I0512 01:25:00.249135    7708 kic_runner.go:93] Run: pgrep kube-apiserver
	I0512 01:25:00.249135    7708 kic_runner.go:114] Args: [docker exec --privileged custom-weave-20220512010244-7184 pgrep kube-apiserver]
	I0512 01:25:02.438242    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:03.586348    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.147946s)
	I0512 01:25:06.619336    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:07.698454    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0789411s)
	I0512 01:25:10.719684    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:11.771192    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.051454s)
	I0512 01:25:14.791955    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:15.933963    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.1419496s)
	I0512 01:25:18.955555    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:20.047894    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0921882s)
	I0512 01:25:23.071422    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:24.188148    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.1166219s)
	I0512 01:25:27.209540    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:28.306823    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0971888s)
	I0512 01:25:31.340052    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:32.418912    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0788045s)
	I0512 01:25:35.446813    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:36.579215    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.1321985s)
	I0512 01:25:39.606579    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:40.790325    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.1833588s)
	I0512 01:25:43.816268    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:44.933197    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.1168725s)
	I0512 01:25:47.954617    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:49.024659    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0699866s)
	I0512 01:25:52.053291    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:53.095766    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0422141s)
	I0512 01:25:56.120407    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:25:57.259601    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.1389253s)
	I0512 01:26:00.294600    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:01.377275    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0816236s)
	I0512 01:26:04.411076    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:05.529757    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.1179816s)
	I0512 01:26:08.548976    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:09.679434    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.1303156s)
	I0512 01:26:12.695877    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:13.778489    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0823886s)
	I0512 01:26:16.803991    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:17.890069    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0860227s)
	I0512 01:26:20.921122    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:22.123410    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.2022266s)
	I0512 01:26:25.143368    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:26.322127    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.1786982s)
	I0512 01:26:29.345810    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:30.479594    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.133693s)
	I0512 01:26:33.497005    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:34.567735    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0706003s)
	I0512 01:26:37.593145    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:38.775360    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.1821541s)
	I0512 01:26:41.820890    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:43.134059    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.3123183s)
	I0512 01:26:46.160235    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:47.278941    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.1186482s)
	I0512 01:26:50.301401    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:51.385079    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0836223s)
	I0512 01:26:54.410971    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:55.500806    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0897793s)
	I0512 01:26:58.527677    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:26:59.660302    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.1324728s)
	I0512 01:27:02.689038    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:03.775471    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0862184s)
	I0512 01:27:06.807257    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:07.903970    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.095571s)
	I0512 01:27:10.921753    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:12.098969    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.172313s)
	I0512 01:27:15.117777    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:16.178200    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.060176s)
	I0512 01:27:19.191948    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:20.267242    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0752392s)
	I0512 01:27:23.292258    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:24.381740    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0894269s)
	I0512 01:27:27.401949    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:28.465307    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0633038s)
	I0512 01:27:31.491595    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:32.575496    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0838453s)
	I0512 01:27:35.606760    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:36.675057    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0681588s)
	I0512 01:27:39.695892    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:40.781285    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0853377s)
	I0512 01:27:43.806516    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:44.897985    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0914129s)
	I0512 01:27:47.918963    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:49.039795    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.1207743s)
	I0512 01:27:52.058845    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:53.136598    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0776979s)
	I0512 01:27:56.172460    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:27:57.267344    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0948282s)
	I0512 01:28:00.298006    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:28:01.400630    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.1025678s)
	I0512 01:28:04.427202    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:28:05.504335    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0770776s)
	I0512 01:28:08.533026    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:28:09.636082    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.1028469s)
	I0512 01:28:12.659873    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:28:13.714409    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0544214s)
	I0512 01:28:16.879994    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:28:17.948560    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0685119s)
	I0512 01:28:20.978851    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:28:22.044771    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0658651s)
	I0512 01:28:25.068187    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:28:26.151770    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0835278s)
	I0512 01:28:29.174421    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:28:30.318528    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.1440477s)
	I0512 01:28:33.339865    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:28:34.549081    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.112119s)
	I0512 01:28:37.573005    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:28:38.680757    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.1076959s)
	I0512 01:28:41.695831    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:28:42.768014    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.072128s)
	I0512 01:28:45.792617    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:28:46.877298    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0846254s)
	I0512 01:28:49.900667    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:28:51.054273    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.1535465s)
	I0512 01:28:54.084519    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:28:55.250922    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.1662067s)
	I0512 01:28:58.270144    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:28:59.353831    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0835752s)
	I0512 01:29:02.383864    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:29:03.469774    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.0858546s)
	I0512 01:29:06.495854    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:29:07.678273    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.1821973s)
	I0512 01:29:10.689737    7708 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0512 01:29:10.689928    7708 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
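	The long run of inspect calls above is a stop loop: the container state is polled (roughly every four seconds here) until a terminal status is read or a 60-attempt budget runs out; in this log the state string stays empty, so the budget is exhausted. A reduced sketch of that pattern, where the interval and the "exited" target are assumptions based on this log rather than minikube's exact stop.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitStopped polls the container status until it reads "exited" or the
// retry budget is spent. An error or empty status (as seen in this log)
// just means "keep polling".
func waitStopped(container string, maxRetries int, interval time.Duration) error {
	for i := 0; i < maxRetries; i++ {
		out, err := exec.Command("docker", "container", "inspect",
			container, "--format", "{{.State.Status}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "exited" {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("maximum number of retries (%d) exceeded", maxRetries)
}

func main() {
	if err := waitStopped("custom-weave-20220512010244-7184", 60, 4*time.Second); err != nil {
		fmt.Println("stop err:", err)
		return
	}
	fmt.Println("stopped")
}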
	I0512 01:29:10.718130    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:29:11.904698    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.1865072s)
	W0512 01:29:11.904698    7708 delete.go:135] deletehost failed: Docker machine "custom-weave-20220512010244-7184" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0512 01:29:11.915704    7708 cli_runner.go:164] Run: docker container inspect -f {{.Id}} custom-weave-20220512010244-7184
	I0512 01:29:13.063922    7708 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} custom-weave-20220512010244-7184: (1.1481151s)
	I0512 01:29:13.073800    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:29:14.265014    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.1911535s)
	I0512 01:29:14.274011    7708 cli_runner.go:164] Run: docker exec --privileged -t custom-weave-20220512010244-7184 /bin/bash -c "sudo init 0"
	W0512 01:29:15.474422    7708 cli_runner.go:211] docker exec --privileged -t custom-weave-20220512010244-7184 /bin/bash -c "sudo init 0" returned with exit code 1
	I0512 01:29:15.474422    7708 cli_runner.go:217] Completed: docker exec --privileged -t custom-weave-20220512010244-7184 /bin/bash -c "sudo init 0": (1.2003498s)
	I0512 01:29:15.474422    7708 oci.go:625] error shutdown custom-weave-20220512010244-7184: docker exec --privileged -t custom-weave-20220512010244-7184 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container 7d1c6a00f8e4969acb332305a972b6855a49846a5d43e97c62e78dc742988076 is not running
	I0512 01:29:16.485707    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:29:17.620855    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.1350899s)
	I0512 01:29:17.620855    7708 oci.go:639] temporary error: container custom-weave-20220512010244-7184 status is  but expect it to be exited
	I0512 01:29:17.620855    7708 oci.go:645] Successfully shutdown container custom-weave-20220512010244-7184
	I0512 01:29:17.627856    7708 cli_runner.go:164] Run: docker rm -f -v custom-weave-20220512010244-7184
	I0512 01:29:18.840835    7708 cli_runner.go:217] Completed: docker rm -f -v custom-weave-20220512010244-7184: (1.2129165s)
	I0512 01:29:18.851827    7708 cli_runner.go:164] Run: docker container inspect -f {{.Id}} custom-weave-20220512010244-7184
	W0512 01:29:20.004589    7708 cli_runner.go:211] docker container inspect -f {{.Id}} custom-weave-20220512010244-7184 returned with exit code 1
	I0512 01:29:20.004589    7708 cli_runner.go:217] Completed: docker container inspect -f {{.Id}} custom-weave-20220512010244-7184: (1.1527031s)
	I0512 01:29:20.012582    7708 cli_runner.go:164] Run: docker network inspect custom-weave-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0512 01:29:21.242400    7708 cli_runner.go:211] docker network inspect custom-weave-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 01:29:21.242400    7708 cli_runner.go:217] Completed: docker network inspect custom-weave-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2296061s)
	I0512 01:29:21.253105    7708 network_create.go:272] running [docker network inspect custom-weave-20220512010244-7184] to gather additional debugging logs...
	I0512 01:29:21.253105    7708 cli_runner.go:164] Run: docker network inspect custom-weave-20220512010244-7184
	W0512 01:29:22.425701    7708 cli_runner.go:211] docker network inspect custom-weave-20220512010244-7184 returned with exit code 1
	I0512 01:29:22.425701    7708 cli_runner.go:217] Completed: docker network inspect custom-weave-20220512010244-7184: (1.1725359s)
	I0512 01:29:22.425701    7708 network_create.go:275] error running [docker network inspect custom-weave-20220512010244-7184]: docker network inspect custom-weave-20220512010244-7184: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20220512010244-7184
	I0512 01:29:22.425701    7708 network_create.go:277] output of [docker network inspect custom-weave-20220512010244-7184]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20220512010244-7184
	
	** /stderr **
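	When the templated inspect fails, additional debugging output is gathered by re-running the bare docker network inspect and recording stdout and stderr separately, which is how the empty [] and the "No such network" error both end up in the log above. A small sketch of that capture:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Capture stdout and stderr into separate buffers so both streams can
	// be logged individually, as network_create.go does above.
	var stdout, stderr bytes.Buffer
	cmd := exec.Command("docker", "network", "inspect", "custom-weave-20220512010244-7184")
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	err := cmd.Run()
	fmt.Printf("err: %v\n-- stdout --\n%s\n-- stderr --\n%s\n",
		err, stdout.String(), stderr.String())
}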
	W0512 01:29:22.427407    7708 delete.go:139] delete failed (probably ok) <nil>
	I0512 01:29:22.427407    7708 fix.go:115] Sleeping 1 second for extra luck!
	I0512 01:29:23.433342    7708 start.go:131] createHost starting for "" (driver="docker")
	I0512 01:29:23.436347    7708 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0512 01:29:23.436347    7708 start.go:165] libmachine.API.Create for "custom-weave-20220512010244-7184" (driver="docker")
	I0512 01:29:23.436347    7708 client.go:168] LocalClient.Create starting
	I0512 01:29:23.437338    7708 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0512 01:29:23.437338    7708 main.go:134] libmachine: Decoding PEM data...
	I0512 01:29:23.437338    7708 main.go:134] libmachine: Parsing certificate...
	I0512 01:29:23.437338    7708 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0512 01:29:23.438338    7708 main.go:134] libmachine: Decoding PEM data...
	I0512 01:29:23.438338    7708 main.go:134] libmachine: Parsing certificate...
	I0512 01:29:23.448344    7708 cli_runner.go:164] Run: docker network inspect custom-weave-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0512 01:29:24.684910    7708 cli_runner.go:211] docker network inspect custom-weave-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 01:29:24.684949    7708 cli_runner.go:217] Completed: docker network inspect custom-weave-20220512010244-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.2364627s)
	I0512 01:29:24.694114    7708 network_create.go:272] running [docker network inspect custom-weave-20220512010244-7184] to gather additional debugging logs...
	I0512 01:29:24.694197    7708 cli_runner.go:164] Run: docker network inspect custom-weave-20220512010244-7184
	W0512 01:29:25.902164    7708 cli_runner.go:211] docker network inspect custom-weave-20220512010244-7184 returned with exit code 1
	I0512 01:29:25.902164    7708 cli_runner.go:217] Completed: docker network inspect custom-weave-20220512010244-7184: (1.2076992s)
	I0512 01:29:25.902258    7708 network_create.go:275] error running [docker network inspect custom-weave-20220512010244-7184]: docker network inspect custom-weave-20220512010244-7184: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20220512010244-7184
	I0512 01:29:25.902385    7708 network_create.go:277] output of [docker network inspect custom-weave-20220512010244-7184]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20220512010244-7184
	
	** /stderr **
	I0512 01:29:25.912969    7708 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 01:29:27.044568    7708 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1315419s)
	I0512 01:29:27.061570    7708 network.go:284] reusing subnet 192.168.49.0 that has expired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000007408] amended:false}} dirty:map[] misses:0}
	I0512 01:29:27.061570    7708 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:29:27.061570    7708 network_create.go:115] attempt to create docker network custom-weave-20220512010244-7184 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0512 01:29:27.068565    7708 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220512010244-7184
	W0512 01:29:28.235630    7708 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220512010244-7184 returned with exit code 1
	I0512 01:29:28.235630    7708 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220512010244-7184: (1.1670054s)
	W0512 01:29:28.235630    7708 network_create.go:107] failed to create docker network custom-weave-20220512010244-7184 192.168.49.0/24, will retry: subnet is taken
	I0512 01:29:28.252876    7708 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000007408] amended:false}} dirty:map[] misses:0}
	I0512 01:29:28.252876    7708 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:29:28.270275    7708 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000007408] amended:true}} dirty:map[192.168.49.0:0xc000007408 192.168.58.0:0xc000126970] misses:0}
	I0512 01:29:28.270885    7708 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:29:28.270885    7708 network_create.go:115] attempt to create docker network custom-weave-20220512010244-7184 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0512 01:29:28.277364    7708 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220512010244-7184
	W0512 01:29:29.483374    7708 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220512010244-7184 returned with exit code 1
	I0512 01:29:29.483374    7708 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220512010244-7184: (1.2059485s)
	W0512 01:29:29.483374    7708 network_create.go:107] failed to create docker network custom-weave-20220512010244-7184 192.168.58.0/24, will retry: subnet is taken
	I0512 01:29:29.503200    7708 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000007408] amended:true}} dirty:map[192.168.49.0:0xc000007408 192.168.58.0:0xc000126970] misses:1}
	I0512 01:29:29.503200    7708 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:29:29.517789    7708 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000007408] amended:true}} dirty:map[192.168.49.0:0xc000007408 192.168.58.0:0xc000126970 192.168.67.0:0xc000014480] misses:1}
	I0512 01:29:29.517789    7708 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:29:29.517789    7708 network_create.go:115] attempt to create docker network custom-weave-20220512010244-7184 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0512 01:29:29.525822    7708 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220512010244-7184
	W0512 01:29:30.686107    7708 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220512010244-7184 returned with exit code 1
	I0512 01:29:30.686107    7708 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220512010244-7184: (1.1602258s)
	W0512 01:29:30.686107    7708 network_create.go:107] failed to create docker network custom-weave-20220512010244-7184 192.168.67.0/24, will retry: subnet is taken
	I0512 01:29:30.706211    7708 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000007408] amended:true}} dirty:map[192.168.49.0:0xc000007408 192.168.58.0:0xc000126970 192.168.67.0:0xc000014480] misses:2}
	I0512 01:29:30.706211    7708 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:29:30.726112    7708 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000007408] amended:true}} dirty:map[192.168.49.0:0xc000007408 192.168.58.0:0xc000126970 192.168.67.0:0xc000014480 192.168.76.0:0xc000007150] misses:2}
	I0512 01:29:30.726112    7708 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:29:30.726112    7708 network_create.go:115] attempt to create docker network custom-weave-20220512010244-7184 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0512 01:29:30.736111    7708 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220512010244-7184
	I0512 01:29:31.965016    7708 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220512010244-7184: (1.228842s)
	I0512 01:29:31.965016    7708 network_create.go:99] docker network custom-weave-20220512010244-7184 192.168.76.0/24 created
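	The three "subnet is taken" failures above show the subnet walk: candidate /24 networks are tried in sequence (192.168.49.0, 192.168.58.0, 192.168.67.0, 192.168.76.0) until docker network create succeeds. A compact sketch of that loop, reusing the exact flags from the logged commands; the step size of 9 is inferred from this log's sequence, not taken from minikube's network.go:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "custom-weave-20220512010244-7184"
	for third := 49; third <= 76; third += 9 { // 49, 58, 67, 76 as in the log
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		// Flags mirror the logged command, including the 1500 MTU option.
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true", name).CombinedOutput()
		if err == nil {
			fmt.Printf("created %s on %s\n", name, subnet)
			return
		}
		fmt.Printf("%s failed, trying next subnet: %s", subnet, out)
	}
}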
	I0512 01:29:31.965016    7708 kic.go:106] calculated static IP "192.168.76.2" for the "custom-weave-20220512010244-7184" container
	I0512 01:29:31.990158    7708 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0512 01:29:33.093327    7708 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.1031137s)
	I0512 01:29:33.100324    7708 cli_runner.go:164] Run: docker volume create custom-weave-20220512010244-7184 --label name.minikube.sigs.k8s.io=custom-weave-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true
	I0512 01:29:34.219475    7708 cli_runner.go:217] Completed: docker volume create custom-weave-20220512010244-7184 --label name.minikube.sigs.k8s.io=custom-weave-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true: (1.1190942s)
	I0512 01:29:34.219475    7708 oci.go:103] Successfully created a docker volume custom-weave-20220512010244-7184
	I0512 01:29:34.227481    7708 cli_runner.go:164] Run: docker run --rm --name custom-weave-20220512010244-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220512010244-7184 --entrypoint /usr/bin/test -v custom-weave-20220512010244-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib
	I0512 01:29:36.914626    7708 cli_runner.go:217] Completed: docker run --rm --name custom-weave-20220512010244-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220512010244-7184 --entrypoint /usr/bin/test -v custom-weave-20220512010244-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib: (2.6870081s)
	I0512 01:29:36.914626    7708 oci.go:107] Successfully prepared a docker volume custom-weave-20220512010244-7184
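	"Successfully prepared" above means the volume passed a sidecar check: a throwaway container mounts the volume at /var and runs /usr/bin/test -d /var/lib, so a zero exit code proves the volume is mountable and /var/lib exists inside it. A sketch of that check (names and image copied from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	volume := "custom-weave-20220512010244-7184"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138"
	// /usr/bin/test becomes the container entrypoint; its exit code is the
	// whole result of the probe.
	err := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/test",
		"-v", volume+":/var", image, "-d", "/var/lib").Run()
	if err != nil {
		fmt.Println("volume not usable:", err)
		return
	}
	fmt.Println("volume", volume, "ready")
}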
	I0512 01:29:36.914626    7708 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:29:36.914626    7708 kic.go:179] Starting extracting preloaded images to volume ...
	I0512 01:29:36.921653    7708 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220512010244-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir
	I0512 01:30:01.889661    7708 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220512010244-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir: (24.9666237s)
	I0512 01:30:01.889808    7708 kic.go:188] duration metric: took 24.973770 seconds to extract preloaded images to volume
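
Annotation: the extraction above uses a volume-populator pattern. A throwaway --rm container bind-mounts the host tarball read-only, mounts the named volume at /extractDir, and runs tar as its entrypoint, so the preloaded images land inside the volume with no long-lived container (the earlier /usr/bin/test -d /var/lib run is the same trick, used only to probe the volume). A hedged Go sketch of the same invocation; paths and names are taken from the log, the wrapper itself is hypothetical.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload replays the logged command: tar runs inside a --rm container,
// reading the lz4 tarball from a read-only bind mount and unpacking it into
// the named volume mounted at /extractDir.
func extractPreload(tarball, volume, image string) (time.Duration, error) {
	start := time.Now()
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return 0, fmt.Errorf("extract: %v: %s", err, out)
	}
	return time.Since(start), nil // the "duration metric" reported in the log
}

func main() {
	d, err := extractPreload(
		`C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4`,
		"custom-weave-20220512010244-7184",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138")
	fmt.Println(d, err)
}
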
	I0512 01:30:01.903876    7708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:30:04.260675    7708 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.3566802s)
	I0512 01:30:04.260675    7708 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:87 OomKillDisable:true NGoroutines:63 SystemTime:2022-05-12 01:30:03.0704676 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:30:04.270495    7708 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0512 01:30:06.464045    7708 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.1932445s)
	I0512 01:30:06.472826    7708 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220512010244-7184 --name custom-weave-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220512010244-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220512010244-7184 --network custom-weave-20220512010244-7184 --ip 192.168.76.2 --volume custom-weave-20220512010244-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
	I0512 01:30:08.553779    7708 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220512010244-7184 --name custom-weave-20220512010244-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220512010244-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220512010244-7184 --network custom-weave-20220512010244-7184 --ip 192.168.76.2 --volume custom-weave-20220512010244-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: (2.0808478s)
	I0512 01:30:08.560519    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Running}}
	I0512 01:30:09.785767    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Running}}: (1.2241847s)
	I0512 01:30:09.795497    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:30:11.073665    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.2781032s)
	I0512 01:30:11.079661    7708 cli_runner.go:164] Run: docker exec custom-weave-20220512010244-7184 stat /var/lib/dpkg/alternatives/iptables
	I0512 01:30:12.457040    7708 cli_runner.go:217] Completed: docker exec custom-weave-20220512010244-7184 stat /var/lib/dpkg/alternatives/iptables: (1.3773097s)
	I0512 01:30:12.457040    7708 oci.go:247] the created container "custom-weave-20220512010244-7184" has a running status.
	I0512 01:30:12.457040    7708 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-weave-20220512010244-7184\id_rsa...
	I0512 01:30:13.089135    7708 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-weave-20220512010244-7184\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
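
Annotation: the key created here is a plain RSA keypair; the public half is copied into the node container as its authorized_keys entry (a 2048-bit RSA authorized_keys line is a few hundred bytes, consistent with the 381 bytes logged). A sketch of producing such a pair in Go; it assumes the golang.org/x/crypto/ssh module is available and is not minikube's actual kic.go code, though the file names mirror the log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate an RSA keypair, as the "Creating ssh key for kic" step does.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// PEM-encode the private key (id_rsa), readable only by the current user.
	priv := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", priv, 0600); err != nil {
		panic(err)
	}

	// authorized_keys form of the public key (id_rsa.pub).
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(ssh.MarshalAuthorizedKey(pub)))
}
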
	I0512 01:30:14.410537    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:30:15.639757    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.2281538s)
	I0512 01:30:15.659743    7708 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0512 01:30:15.659743    7708 kic_runner.go:114] Args: [docker exec --privileged custom-weave-20220512010244-7184 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0512 01:30:17.076510    7708 kic_runner.go:123] Done: [docker exec --privileged custom-weave-20220512010244-7184 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.4166956s)
	I0512 01:30:17.079588    7708 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-weave-20220512010244-7184\id_rsa...
	I0512 01:30:17.663186    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:30:18.919705    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.2564555s)
	I0512 01:30:18.919705    7708 machine.go:88] provisioning docker machine ...
	I0512 01:30:18.919705    7708 ubuntu.go:169] provisioning hostname "custom-weave-20220512010244-7184"
	I0512 01:30:18.934623    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	I0512 01:30:20.291326    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.3566344s)
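
Annotation: because every container port is published to 127.0.0.1 on a random host port (--publish=127.0.0.1::22 and friends in the docker run above), each SSH step first asks docker which host port backs 22/tcp; here it resolved to 51331, which the libmachine lines below dial. The log does this inline with a Go template; an equivalent sketch that decodes the inspect JSON instead (struct fields are trimmed to what is needed and the helper name is illustrative):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspect captures just the published-port slice of `docker container inspect`.
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string
			HostPort string
		}
	}
}

// hostPort returns the host port bound to a container port such as "22/tcp".
func hostPort(container, port string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var res []inspect
	if err := json.Unmarshal(out, &res); err != nil {
		return "", err
	}
	if len(res) == 0 || len(res[0].NetworkSettings.Ports[port]) == 0 {
		return "", fmt.Errorf("port %s not published on %s", port, container)
	}
	return res[0].NetworkSettings.Ports[port][0].HostPort, nil
}

func main() {
	p, err := hostPort("custom-weave-20220512010244-7184", "22/tcp")
	fmt.Println(p, err)
}
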
	I0512 01:30:20.295326    7708 main.go:134] libmachine: Using SSH client type: native
	I0512 01:30:20.295326    7708 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51331 <nil> <nil>}
	I0512 01:30:20.295326    7708 main.go:134] libmachine: About to run SSH command:
	sudo hostname custom-weave-20220512010244-7184 && echo "custom-weave-20220512010244-7184" | sudo tee /etc/hostname
	I0512 01:30:20.516105    7708 main.go:134] libmachine: SSH cmd err, output: <nil>: custom-weave-20220512010244-7184
	
	I0512 01:30:20.525167    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	I0512 01:30:21.693388    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.168054s)
	I0512 01:30:21.697361    7708 main.go:134] libmachine: Using SSH client type: native
	I0512 01:30:21.697843    7708 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51331 <nil> <nil>}
	I0512 01:30:21.697884    7708 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-weave-20220512010244-7184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-weave-20220512010244-7184/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-weave-20220512010244-7184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0512 01:30:21.892013    7708 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0512 01:30:21.892013    7708 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0512 01:30:21.892555    7708 ubuntu.go:177] setting up certificates
	I0512 01:30:21.892555    7708 provision.go:83] configureAuth start
	I0512 01:30:21.905474    7708 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220512010244-7184
	I0512 01:30:23.099422    7708 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220512010244-7184: (1.1938877s)
	I0512 01:30:23.099422    7708 provision.go:138] copyHostCerts
	I0512 01:30:23.099422    7708 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0512 01:30:23.099422    7708 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0512 01:30:23.100422    7708 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0512 01:30:23.101421    7708 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0512 01:30:23.101421    7708 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0512 01:30:23.101421    7708 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0512 01:30:23.103428    7708 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0512 01:30:23.103428    7708 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0512 01:30:23.103428    7708 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0512 01:30:23.104420    7708 provision.go:112] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.custom-weave-20220512010244-7184 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube custom-weave-20220512010244-7184]
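
Annotation: the server certificate generated above carries both IP and DNS SANs, so one cert validates the daemon at 192.168.76.2, 127.0.0.1, localhost, minikube, and the profile hostname. A minimal crypto/x509 sketch with those SANs; it is self-signed for brevity, whereas the log shows minikube signing with its CA (ca.pem/ca-key.pem).

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.custom-weave-20220512010244-7184"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the "san=[...]" list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "custom-weave-20220512010244-7184"},
	}
	// Self-signed: template doubles as parent. minikube instead passes its CA here.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
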
	I0512 01:30:23.683385    7708 provision.go:172] copyRemoteCerts
	I0512 01:30:23.693240    7708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0512 01:30:23.703096    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	I0512 01:30:24.973649    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.2704879s)
	I0512 01:30:24.974006    7708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51331 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-weave-20220512010244-7184\id_rsa Username:docker}
	I0512 01:30:25.120840    7708 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.4269945s)
	I0512 01:30:25.120840    7708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0512 01:30:25.186459    7708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1269 bytes)
	I0512 01:30:25.260298    7708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0512 01:30:25.321997    7708 provision.go:86] duration metric: configureAuth took 3.429223s
	I0512 01:30:25.321997    7708 ubuntu.go:193] setting minikube options for container-runtime
	I0512 01:30:25.322776    7708 config.go:178] Loaded profile config "custom-weave-20220512010244-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:30:25.338203    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	I0512 01:30:26.671860    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.3335898s)
	I0512 01:30:26.675845    7708 main.go:134] libmachine: Using SSH client type: native
	I0512 01:30:26.675845    7708 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51331 <nil> <nil>}
	I0512 01:30:26.675845    7708 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0512 01:30:26.822863    7708 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0512 01:30:26.822863    7708 ubuntu.go:71] root file system type: overlay
	I0512 01:30:26.822863    7708 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0512 01:30:26.841879    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	I0512 01:30:28.405333    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.563375s)
	I0512 01:30:28.414316    7708 main.go:134] libmachine: Using SSH client type: native
	I0512 01:30:28.414316    7708 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51331 <nil> <nil>}
	I0512 01:30:28.414316    7708 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0512 01:30:28.684332    7708 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0512 01:30:28.693328    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	I0512 01:30:30.357183    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.6637708s)
	I0512 01:30:30.360173    7708 main.go:134] libmachine: Using SSH client type: native
	I0512 01:30:30.361173    7708 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51331 <nil> <nil>}
	I0512 01:30:30.361173    7708 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0512 01:30:32.524027    7708 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-05 13:17:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-12 01:30:28.667114000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0512 01:30:32.524027    7708 machine.go:91] provisioned docker machine in 13.6036329s
	I0512 01:30:32.524027    7708 client.go:171] LocalClient.Create took 1m9.0841761s
	I0512 01:30:32.524027    7708 start.go:173] duration metric: libmachine.API.Create for "custom-weave-20220512010244-7184" took 1m9.0841761s
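
Annotation: the restart logic behind the diff above is worth noting. The rendered unit is written to docker.service.new, diffed against the installed copy, and only when they differ (as here, where minikube's drop-in clears and replaces the stock ExecStart) does the mv / daemon-reload / enable / restart chain run; on the second provisioning pass below the diff is empty and docker is left alone. A Go sketch of the same idempotent update; the function is illustrative and root/sudo handling is elided.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit mirrors the logged one-liner: only when the freshly rendered unit
// differs from the installed file does it replace the file and restart docker.
func updateUnit(path string, rendered []byte) error {
	current, _ := os.ReadFile(path) // a missing file reads as empty, forcing an update
	if bytes.Equal(current, rendered) {
		return nil // unchanged: skip the disruptive daemon-reload/restart
	}
	if err := os.WriteFile(path, rendered, 0644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
	}
	return nil
}

func main() {
	// Truncated example content; running this for real requires root.
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
		fmt.Println(err)
	}
}
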
	I0512 01:30:32.524027    7708 start.go:306] post-start starting for "custom-weave-20220512010244-7184" (driver="docker")
	I0512 01:30:32.524027    7708 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0512 01:30:32.548998    7708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0512 01:30:32.560983    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	I0512 01:30:34.064172    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.5031126s)
	I0512 01:30:34.064172    7708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51331 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-weave-20220512010244-7184\id_rsa Username:docker}
	I0512 01:30:34.210050    7708 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.660968s)
	I0512 01:30:34.234587    7708 ssh_runner.go:195] Run: cat /etc/os-release
	I0512 01:30:34.250576    7708 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0512 01:30:34.250576    7708 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0512 01:30:34.250576    7708 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0512 01:30:34.250576    7708 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0512 01:30:34.250576    7708 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0512 01:30:34.251582    7708 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0512 01:30:34.252579    7708 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem -> 71842.pem in /etc/ssl/certs
	I0512 01:30:34.264561    7708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0512 01:30:34.285559    7708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /etc/ssl/certs/71842.pem (1708 bytes)
	I0512 01:30:34.358579    7708 start.go:309] post-start completed in 1.8344581s
	I0512 01:30:34.370571    7708 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220512010244-7184
	I0512 01:30:35.689980    7708 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220512010244-7184: (1.3193426s)
	I0512 01:30:35.689980    7708 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\config.json ...
	I0512 01:30:35.714862    7708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 01:30:35.729870    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	I0512 01:30:37.055197    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.3252604s)
	I0512 01:30:37.055800    7708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51331 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-weave-20220512010244-7184\id_rsa Username:docker}
	I0512 01:30:37.140718    7708 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.4257839s)
	I0512 01:30:37.152884    7708 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0512 01:30:37.164722    7708 start.go:134] duration metric: createHost completed in 1m13.7276402s
	I0512 01:30:37.178722    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:30:38.393241    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.2144568s)
	W0512 01:30:38.393241    7708 fix.go:129] unexpected machine state, will restart: <nil>
	I0512 01:30:38.393241    7708 machine.go:88] provisioning docker machine ...
	I0512 01:30:38.393241    7708 ubuntu.go:169] provisioning hostname "custom-weave-20220512010244-7184"
	I0512 01:30:38.405223    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	I0512 01:30:39.620751    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.215466s)
	I0512 01:30:39.626773    7708 main.go:134] libmachine: Using SSH client type: native
	I0512 01:30:39.627777    7708 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51331 <nil> <nil>}
	I0512 01:30:39.627777    7708 main.go:134] libmachine: About to run SSH command:
	sudo hostname custom-weave-20220512010244-7184 && echo "custom-weave-20220512010244-7184" | sudo tee /etc/hostname
	I0512 01:30:39.851323    7708 main.go:134] libmachine: SSH cmd err, output: <nil>: custom-weave-20220512010244-7184
	
	I0512 01:30:39.861072    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	I0512 01:30:41.108197    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.2470619s)
	I0512 01:30:41.116178    7708 main.go:134] libmachine: Using SSH client type: native
	I0512 01:30:41.117176    7708 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51331 <nil> <nil>}
	I0512 01:30:41.117176    7708 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-weave-20220512010244-7184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-weave-20220512010244-7184/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-weave-20220512010244-7184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0512 01:30:41.311782    7708 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0512 01:30:41.311782    7708 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0512 01:30:41.311782    7708 ubuntu.go:177] setting up certificates
	I0512 01:30:41.311782    7708 provision.go:83] configureAuth start
	I0512 01:30:41.326631    7708 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220512010244-7184
	I0512 01:30:42.727265    7708 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220512010244-7184: (1.400455s)
	I0512 01:30:42.727265    7708 provision.go:138] copyHostCerts
	I0512 01:30:42.727265    7708 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0512 01:30:42.727265    7708 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0512 01:30:42.728255    7708 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0512 01:30:42.729259    7708 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0512 01:30:42.729259    7708 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0512 01:30:42.730263    7708 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0512 01:30:42.731263    7708 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0512 01:30:42.731263    7708 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0512 01:30:42.731263    7708 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0512 01:30:42.732255    7708 provision.go:112] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.custom-weave-20220512010244-7184 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube custom-weave-20220512010244-7184]
	I0512 01:30:43.110073    7708 provision.go:172] copyRemoteCerts
	I0512 01:30:43.131065    7708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0512 01:30:43.145053    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	I0512 01:30:44.742309    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.5971752s)
	I0512 01:30:44.742962    7708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51331 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-weave-20220512010244-7184\id_rsa Username:docker}
	I0512 01:30:44.896733    7708 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.7655779s)
	I0512 01:30:44.896733    7708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0512 01:30:44.990725    7708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1269 bytes)
	I0512 01:30:45.138814    7708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0512 01:30:45.238527    7708 provision.go:86] duration metric: configureAuth took 3.9265461s
	I0512 01:30:45.238527    7708 ubuntu.go:193] setting minikube options for container-runtime
	I0512 01:30:45.239530    7708 config.go:178] Loaded profile config "custom-weave-20220512010244-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:30:45.257518    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	I0512 01:30:46.896278    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.6386771s)
	I0512 01:30:46.913651    7708 main.go:134] libmachine: Using SSH client type: native
	I0512 01:30:46.916690    7708 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51331 <nil> <nil>}
	I0512 01:30:46.916690    7708 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0512 01:30:47.160648    7708 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0512 01:30:47.160648    7708 ubuntu.go:71] root file system type: overlay
	I0512 01:30:47.161664    7708 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0512 01:30:47.177646    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	I0512 01:30:48.837364    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.6596337s)
	I0512 01:30:48.847339    7708 main.go:134] libmachine: Using SSH client type: native
	I0512 01:30:48.847339    7708 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51331 <nil> <nil>}
	I0512 01:30:48.847339    7708 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0512 01:30:49.094964    7708 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0512 01:30:49.109991    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	I0512 01:30:50.645937    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.5358685s)
	I0512 01:30:50.653932    7708 main.go:134] libmachine: Using SSH client type: native
	I0512 01:30:50.654916    7708 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51331 <nil> <nil>}
	I0512 01:30:50.654916    7708 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0512 01:30:50.928337    7708 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0512 01:30:50.928337    7708 machine.go:91] provisioned docker machine in 12.5344604s
	I0512 01:30:50.928337    7708 start.go:306] post-start starting for "custom-weave-20220512010244-7184" (driver="docker")
	I0512 01:30:50.928337    7708 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0512 01:30:50.948298    7708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0512 01:30:50.957306    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	I0512 01:30:52.619979    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.6625891s)
	I0512 01:30:52.619979    7708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51331 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-weave-20220512010244-7184\id_rsa Username:docker}
	I0512 01:30:52.815860    7708 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.8674671s)
	I0512 01:30:52.842677    7708 ssh_runner.go:195] Run: cat /etc/os-release
	I0512 01:30:52.855614    7708 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0512 01:30:52.855614    7708 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0512 01:30:52.855614    7708 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0512 01:30:52.855614    7708 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0512 01:30:52.855614    7708 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0512 01:30:52.856620    7708 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0512 01:30:52.856620    7708 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem -> 71842.pem in /etc/ssl/certs
	I0512 01:30:52.867611    7708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0512 01:30:52.889623    7708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /etc/ssl/certs/71842.pem (1708 bytes)
	I0512 01:30:52.962755    7708 start.go:309] post-start completed in 2.0343148s
	I0512 01:30:52.972747    7708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 01:30:52.978782    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	I0512 01:30:54.346525    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.3676741s)
	I0512 01:30:54.346525    7708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51331 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-weave-20220512010244-7184\id_rsa Username:docker}
	I0512 01:30:54.473943    7708 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.501119s)
	I0512 01:30:54.482970    7708 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0512 01:30:54.496943    7708 fix.go:57] fixHost completed within 6m2.0117976s
	I0512 01:30:54.496943    7708 start.go:81] releasing machines lock for "custom-weave-20220512010244-7184", held for 6m2.0119929s
	I0512 01:30:54.517958    7708 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220512010244-7184
	I0512 01:30:55.830235    7708 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220512010244-7184: (1.3122104s)
	I0512 01:30:55.834278    7708 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0512 01:30:55.842217    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	I0512 01:30:55.843228    7708 ssh_runner.go:195] Run: sudo service containerd status
	I0512 01:30:55.850222    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	I0512 01:30:57.236923    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.3946345s)
	I0512 01:30:57.237763    7708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51331 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-weave-20220512010244-7184\id_rsa Username:docker}
	I0512 01:30:57.260904    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.4106098s)
	I0512 01:30:57.261913    7708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51331 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-weave-20220512010244-7184\id_rsa Username:docker}
	I0512 01:30:57.426816    7708 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.5924574s)
	I0512 01:30:57.490842    7708 ssh_runner.go:235] Completed: sudo service containerd status: (1.647476s)
	I0512 01:30:57.515584    7708 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 01:30:57.548583    7708 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0512 01:30:57.559577    7708 ssh_runner.go:195] Run: sudo service crio status
	I0512 01:30:57.609754    7708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0512 01:30:57.682282    7708 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 01:30:57.738921    7708 ssh_runner.go:195] Run: sudo service docker status
	I0512 01:30:57.797425    7708 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 01:30:57.905055    7708 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 01:30:58.016242    7708 out.go:204] * Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	I0512 01:30:58.033267    7708 cli_runner.go:164] Run: docker exec -t custom-weave-20220512010244-7184 dig +short host.docker.internal
	I0512 01:30:59.566520    7708 cli_runner.go:217] Completed: docker exec -t custom-weave-20220512010244-7184 dig +short host.docker.internal: (1.5330574s)
	I0512 01:30:59.566596    7708 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0512 01:30:59.577978    7708 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0512 01:30:59.595169    7708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
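
Annotation: the bash one-liner above is an idempotent hosts-file edit: strip any existing host.minikube.internal line, append the freshly dug address, and copy the result back over /etc/hosts. The same logic in Go (a hypothetical helper; the tab separator and the names match the logged command):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites hostsPath so exactly one line maps name to ip, mirroring
// the logged grep -v / echo / cp pipeline.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // drop any stale entry
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(pinHost("/etc/hosts", "192.168.65.2", "host.minikube.internal"))
}
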
	I0512 01:30:59.638026    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	I0512 01:31:00.920021    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.2819299s)
	I0512 01:31:00.921046    7708 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:31:00.933016    7708 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 01:31:01.010680    7708 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 01:31:01.010680    7708 docker.go:541] Images already preloaded, skipping extraction
	I0512 01:31:01.025323    7708 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 01:31:01.131538    7708 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 01:31:01.131538    7708 cache_images.go:84] Images are preloaded, skipping loading
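
Annotation: "Images are preloaded, skipping loading" follows from comparing the docker images listing above against the image set required for Kubernetes v1.23.5; only a missing image would trigger a load. A sketch of that check, with the required list copied from the logged output and the helper name illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// missingImages reports which required images are absent from `docker images`.
func missingImages(required []string) ([]string, error) {
	out, err := exec.Command("docker", "images",
		"--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) { // one image per line
		have[img] = true
	}
	var missing []string
	for _, img := range required {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	return missing, nil
}

func main() {
	missing, err := missingImages([]string{
		"k8s.gcr.io/kube-apiserver:v1.23.5",
		"k8s.gcr.io/etcd:3.5.1-0",
		"k8s.gcr.io/pause:3.6",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	})
	fmt.Println(missing, err)
}
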
	I0512 01:31:01.146875    7708 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0512 01:31:01.360405    7708 cni.go:95] Creating CNI manager for "testdata\\weavenet.yaml"
	I0512 01:31:01.360405    7708 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0512 01:31:01.360405    7708 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-weave-20220512010244-7184 NodeName:custom-weave-20220512010244-7184 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0512 01:31:01.361409    7708 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "custom-weave-20220512010244-7184"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0512 01:31:01.361409    7708 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=custom-weave-20220512010244-7184 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220512010244-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\weavenet.yaml NodeIP: NodePort:8443 NodeName:}
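
The kubeadm config at kubeadm.go:162 and the kubelet unit above are rendered from the options struct logged at kubeadm.go:158. A minimal text/template sketch of that rendering step, assuming a toy opts struct that covers only a few of the real fields (the field subset and template text here are illustrative, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    // opts holds a small slice of the kubeadm options from the log.
    type opts struct {
        AdvertiseAddress string
        APIServerPort    int
        NodeName         string
        PodSubnet        string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        // Values copied from the kubeadm options line above.
        t.Execute(os.Stdout, opts{"192.168.76.2", 8443,
            "custom-weave-20220512010244-7184", "10.244.0.0/16"})
    }

The rendered bytes are what ssh_runner.go:362 then copies to /var/tmp/minikube/kubeadm.yaml.new.
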
	I0512 01:31:01.378383    7708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0512 01:31:01.412020    7708 binaries.go:44] Found k8s binaries, skipping transfer
	I0512 01:31:01.435021    7708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /var/lib/minikube /etc/init.d
	I0512 01:31:01.459005    7708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0512 01:31:01.495012    7708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0512 01:31:01.827881    7708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I0512 01:31:01.872123    7708 ssh_runner.go:362] scp memory --> /var/lib/minikube/openrc-restart-wrapper.sh (233 bytes)
	I0512 01:31:01.917233    7708 ssh_runner.go:362] scp memory --> /etc/init.d/kubelet (839 bytes)
	I0512 01:31:01.975875    7708 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0512 01:31:01.985875    7708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0512 01:31:02.025902    7708 certs.go:54] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184 for IP: 192.168.76.2
	I0512 01:31:02.026605    7708 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I0512 01:31:02.027398    7708 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I0512 01:31:02.028257    7708 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\client.key
	I0512 01:31:02.028773    7708 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\client.crt with IP's: []
	I0512 01:31:02.173165    7708 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\client.crt ...
	I0512 01:31:02.173165    7708 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\client.crt: {Name:mk05b8eff337940520bab053e17a04ec5fc5415e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:31:02.175110    7708 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\client.key ...
	I0512 01:31:02.175169    7708 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\client.key: {Name:mk1253f51684215f8d5ef88f972dd34751768d14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:31:02.176327    7708 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\apiserver.key.31bdca25
	I0512 01:31:02.176327    7708 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0512 01:31:02.482786    7708 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\apiserver.crt.31bdca25 ...
	I0512 01:31:02.482786    7708 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\apiserver.crt.31bdca25: {Name:mk4b59fde2286bc6dcae15c38b4c1fba42d7c0dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:31:02.483366    7708 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\apiserver.key.31bdca25 ...
	I0512 01:31:02.484366    7708 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\apiserver.key.31bdca25: {Name:mk769c068859a69a1ca829d7085d1b95dd3182c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:31:02.484685    7708 certs.go:320] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\apiserver.crt.31bdca25 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\apiserver.crt
	I0512 01:31:02.492371    7708 certs.go:324] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\apiserver.key.31bdca25 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\apiserver.key
	I0512 01:31:02.493476    7708 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\proxy-client.key
	I0512 01:31:02.494253    7708 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\proxy-client.crt with IP's: []
	I0512 01:31:03.541960    7708 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\proxy-client.crt ...
	I0512 01:31:03.541960    7708 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\proxy-client.crt: {Name:mk430cced8283df9e8e2128ebfa2ec83a4f8c3a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:31:03.542954    7708 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\proxy-client.key ...
	I0512 01:31:03.543956    7708 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\proxy-client.key: {Name:mkef7dca90d6df1cb13ab74c6de0c4fe32c9a570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
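
Each certs.go:302 step above generates a key pair plus a certificate whose IP SANs match the list logged by crypto.go:68 ([192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]). A self-contained crypto/x509 sketch of producing a cert with those SANs; it is self-signed here for brevity, whereas the real apiserver cert is signed by minikubeCA:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // IP SANs as logged by crypto.go:68.
            IPAddresses: []net.IP{
                net.ParseIP("192.168.76.2"), net.ParseIP("10.96.0.1"),
                net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, &tpl, &tpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
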
	I0512 01:31:03.556392    7708 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184.pem (1338 bytes)
	W0512 01:31:03.556504    7708 certs.go:384] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184_empty.pem, impossibly tiny 0 bytes
	I0512 01:31:03.556504    7708 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0512 01:31:03.556971    7708 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0512 01:31:03.556971    7708 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0512 01:31:03.556971    7708 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0512 01:31:03.556971    7708 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem (1708 bytes)
	I0512 01:31:03.558983    7708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0512 01:31:03.626319    7708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0512 01:31:03.675903    7708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0512 01:31:03.729936    7708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\custom-weave-20220512010244-7184\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0512 01:31:03.782776    7708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0512 01:31:03.846729    7708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0512 01:31:03.890904    7708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0512 01:31:03.955629    7708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0512 01:31:04.006342    7708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0512 01:31:04.065753    7708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184.pem --> /usr/share/ca-certificates/7184.pem (1338 bytes)
	I0512 01:31:04.136432    7708 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /usr/share/ca-certificates/71842.pem (1708 bytes)
	I0512 01:31:04.187925    7708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0512 01:31:04.256068    7708 ssh_runner.go:195] Run: openssl version
	I0512 01:31:04.278173    7708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71842.pem && ln -fs /usr/share/ca-certificates/71842.pem /etc/ssl/certs/71842.pem"
	I0512 01:31:04.332342    7708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71842.pem
	I0512 01:31:04.345531    7708 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 11 23:10 /usr/share/ca-certificates/71842.pem
	I0512 01:31:04.355525    7708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71842.pem
	I0512 01:31:04.383568    7708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71842.pem /etc/ssl/certs/3ec20f2e.0"
	I0512 01:31:04.435237    7708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0512 01:31:04.470241    7708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:31:04.480241    7708 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 11 23:00 /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:31:04.489229    7708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:31:04.534459    7708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0512 01:31:04.583361    7708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7184.pem && ln -fs /usr/share/ca-certificates/7184.pem /etc/ssl/certs/7184.pem"
	I0512 01:31:04.630106    7708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7184.pem
	I0512 01:31:04.647502    7708 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 11 23:10 /usr/share/ca-certificates/7184.pem
	I0512 01:31:04.661276    7708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7184.pem
	I0512 01:31:04.691213    7708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7184.pem /etc/ssl/certs/51391683.0"
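
The openssl x509 -hash / ln -fs pairs above implement c_rehash by hand: each CA PEM gets a symlink named <subject-hash>.0 in /etc/ssl/certs so OpenSSL-based clients can locate it by hash. A sketch of that step which shells out to openssl just as the log does (hashAndLink is an illustrative helper, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // hashAndLink runs `openssl x509 -hash -noout -in pem` and symlinks the
    // PEM to /etc/ssl/certs/<hash>.0, mirroring the ln -fs commands above.
    func hashAndLink(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        os.Remove(link) // like ln -f: replace any stale link
        return os.Symlink(pem, link)
    }

    func main() {
        if err := hashAndLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }
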
	I0512 01:31:04.734159    7708 kubeadm.go:391] StartCluster: {Name:custom-weave-20220512010244-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220512010244-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata\weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 01:31:04.745019    7708 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0512 01:31:04.849373    7708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0512 01:31:04.886179    7708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0512 01:31:04.909175    7708 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0512 01:31:04.934176    7708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0512 01:31:04.958193    7708 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0512 01:31:04.958193    7708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0512 01:31:32.706305    7708 out.go:204]   - Generating certificates and keys ...
	I0512 01:31:32.717304    7708 out.go:204]   - Booting up control plane ...
	I0512 01:31:32.724305    7708 out.go:204]   - Configuring RBAC rules ...
	I0512 01:31:32.730356    7708 cni.go:95] Creating CNI manager for "testdata\\weavenet.yaml"
	I0512 01:31:32.735306    7708 out.go:177] * Configuring testdata\weavenet.yaml (Container Networking Interface) ...
	I0512 01:31:32.754314    7708 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0512 01:31:32.769346    7708 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0512 01:31:32.783307    7708 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/tmp/minikube/cni.yaml': No such file or directory
	I0512 01:31:32.783307    7708 ssh_runner.go:362] scp testdata\weavenet.yaml --> /var/tmp/minikube/cni.yaml (10948 bytes)
	I0512 01:31:32.939973    7708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0512 01:31:37.320857    7708 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (4.3806615s)
	I0512 01:31:37.320857    7708 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0512 01:31:37.337853    7708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=50a7977b568d2ad3e04003527a57f4502d6177a0 minikube.k8s.io/name=custom-weave-20220512010244-7184 minikube.k8s.io/updated_at=2022_05_12T01_31_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:31:37.337853    7708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:31:37.623672    7708 ops.go:34] apiserver oom_adj: -16
	I0512 01:31:37.639667    7708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:31:38.354125    7708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:31:38.837653    7708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:31:39.347484    7708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:31:39.847275    7708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:31:40.348692    7708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:31:40.853746    7708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:31:41.336801    7708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:31:41.848272    7708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:31:42.350819    7708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:31:42.853886    7708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:31:43.339385    7708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:31:43.849232    7708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:31:44.354501    7708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:31:44.841558    7708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:31:45.344431    7708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:31:46.006634    7708 kubeadm.go:1020] duration metric: took 8.6853371s to wait for elevateKubeSystemPrivileges.
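
The repeated `kubectl get sa default` runs above are a retry loop: the minikube-rbac clusterrolebinding can only be granted once the default service account exists, so the bootstrap polls for it before proceeding. A stdlib-only Go sketch of that loop shape (waitForDefaultSA is illustrative; the binary path and flags are copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA retries `kubectl get sa default` until it succeeds or
    // the deadline passes; the caller then creates the RBAC binding.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            err := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig="+kubeconfig).Run()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("default service account not ready: %v", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForDefaultSA("/var/lib/minikube/binaries/v1.23.5/kubectl",
            "/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }

In the run above the service account appeared after roughly 8.7s of polling, matching the kubeadm.go:1020 duration metric.
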
	I0512 01:31:46.006846    7708 kubeadm.go:393] StartCluster complete in 41.2705945s
	I0512 01:31:46.006933    7708 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:31:46.007125    7708 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0512 01:31:46.010823    7708 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0512 01:31:46.205353    7708 kapi.go:233] failed rescaling deployment, will retry: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0512 01:31:47.806163    7708 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "custom-weave-20220512010244-7184" rescaled to 1
	I0512 01:31:47.806163    7708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0512 01:31:47.806163    7708 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0512 01:31:47.806163    7708 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 01:31:47.806163    7708 addons.go:65] Setting storage-provisioner=true in profile "custom-weave-20220512010244-7184"
	I0512 01:31:47.810909    7708 out.go:177] * Verifying Kubernetes components...
	I0512 01:31:47.806163    7708 addons.go:65] Setting default-storageclass=true in profile "custom-weave-20220512010244-7184"
	I0512 01:31:47.806163    7708 addons.go:153] Setting addon storage-provisioner=true in "custom-weave-20220512010244-7184"
	I0512 01:31:47.807138    7708 config.go:178] Loaded profile config "custom-weave-20220512010244-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	W0512 01:31:47.811504    7708 addons.go:165] addon storage-provisioner should already be in state true
	I0512 01:31:47.811456    7708 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-weave-20220512010244-7184"
	I0512 01:31:47.811725    7708 host.go:66] Checking if "custom-weave-20220512010244-7184" exists ...
	I0512 01:31:47.840424    7708 ssh_runner.go:195] Run: sudo service kubelet status
	I0512 01:31:47.846550    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:31:47.848906    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:31:48.482543    7708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0512 01:31:48.493523    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	I0512 01:31:49.210153    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.3611777s)
	I0512 01:31:49.221164    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.3745446s)
	I0512 01:31:49.222159    7708 addons.go:153] Setting addon default-storageclass=true in "custom-weave-20220512010244-7184"
	I0512 01:31:49.248297    7708 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0512 01:31:49.249432    7708 addons.go:165] addon default-storageclass should already be in state true
	I0512 01:31:49.253308    7708 host.go:66] Checking if "custom-weave-20220512010244-7184" exists ...
	I0512 01:31:49.253308    7708 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 01:31:49.253308    7708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0512 01:31:49.263564    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	I0512 01:31:49.274007    7708 cli_runner.go:164] Run: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}
	I0512 01:31:49.508438    7708 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.0258428s)
	I0512 01:31:49.508438    7708 start.go:815] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
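
The start.go:815 record injection rewrites the CoreDNS ConfigMap by inserting a hosts{} block immediately before the `forward . /etc/resolv.conf` directive, which is exactly what the sed pipeline above does. A hedged Go sketch of the same Corefile edit (injectHostRecord is illustrative, not minikube's code):

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostRecord inserts a hosts{} block ahead of the forward directive
    // in a Corefile, matching the sed expression in the log.
    func injectHostRecord(corefile, ip, name string) string {
        block := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }", ip, name)
        var out []string
        for _, line := range strings.Split(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                out = append(out, block) // new block goes just above forward
            }
            out = append(out, line)
        }
        return strings.Join(out, "\n")
    }

    func main() {
        corefile := ".:53 {\n        forward . /etc/resolv.conf\n}"
        fmt.Println(injectHostRecord(corefile, "192.168.65.2", "host.minikube.internal"))
    }

The fallthrough directive keeps lookups for other names flowing to the forward plugin, so only host.minikube.internal is answered locally.
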
	I0512 01:31:49.871626    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.3780329s)
	I0512 01:31:49.874699    7708 node_ready.go:35] waiting up to 5m0s for node "custom-weave-20220512010244-7184" to be "Ready" ...
	I0512 01:31:49.914650    7708 node_ready.go:49] node "custom-weave-20220512010244-7184" has status "Ready":"True"
	I0512 01:31:49.915241    7708 node_ready.go:38] duration metric: took 40.5397ms waiting for node "custom-weave-20220512010244-7184" to be "Ready" ...
	I0512 01:31:49.915241    7708 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
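
The pod_ready lines that follow come from a bounded polling loop: check the pod's Ready condition every couple of seconds until it flips to True, the pod disappears (as happens to coredns-64897985d-4tt76 below, whose replica set replaces it with coredns-64897985d-qb7hg), or the wait budget runs out. A stdlib-only sketch of that loop shape (waitPodReady and its check callback are illustrative stand-ins for the client-go condition lookup):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitPodReady polls check until the pod reports Ready or timeout
    // elapses, the same shape as pod_ready.go's wait loop.
    func waitPodReady(name string, timeout time.Duration, check func() (bool, error)) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            ready, err := check()
            if err != nil {
                return err // e.g. pod deleted and replaced mid-wait
            }
            if ready {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return errors.New("timed out waiting for the condition")
    }

    func main() {
        // Short timeout and an always-false check, just to show the shape.
        err := waitPodReady("weave-net-st7fh", 5*time.Second,
            func() (bool, error) { return false, nil })
        fmt.Println(err)
    }
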
	I0512 01:31:49.943332    7708 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-4tt76" in "kube-system" namespace to be "Ready" ...
	I0512 01:31:50.661095    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.3974594s)
	I0512 01:31:50.662081    7708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51331 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-weave-20220512010244-7184\id_rsa Username:docker}
	I0512 01:31:50.674142    7708 cli_runner.go:217] Completed: docker container inspect custom-weave-20220512010244-7184 --format={{.State.Status}}: (1.4000641s)
	I0512 01:31:50.674142    7708 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0512 01:31:50.674142    7708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0512 01:31:50.681088    7708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184
	I0512 01:31:51.030542    7708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 01:31:52.030860    7708 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220512010244-7184: (1.3497033s)
	I0512 01:31:52.030860    7708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51331 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\custom-weave-20220512010244-7184\id_rsa Username:docker}
	I0512 01:31:52.184378    7708 pod_ready.go:102] pod "coredns-64897985d-4tt76" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:52.332806    7708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0512 01:31:52.535593    7708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.5049747s)
	I0512 01:31:53.525305    7708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.1924385s)
	I0512 01:31:53.558269    7708 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0512 01:31:53.581276    7708 addons.go:417] enableAddons completed in 5.7738195s
	I0512 01:31:54.709724    7708 pod_ready.go:102] pod "coredns-64897985d-4tt76" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:57.118725    7708 pod_ready.go:102] pod "coredns-64897985d-4tt76" in "kube-system" namespace has status "Ready":"False"
	I0512 01:31:59.120784    7708 pod_ready.go:102] pod "coredns-64897985d-4tt76" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:00.011795    7708 pod_ready.go:97] error getting pod "coredns-64897985d-4tt76" in "kube-system" namespace (skipping!): pods "coredns-64897985d-4tt76" not found
	I0512 01:32:00.011795    7708 pod_ready.go:81] duration metric: took 10.0679517s waiting for pod "coredns-64897985d-4tt76" in "kube-system" namespace to be "Ready" ...
	E0512 01:32:00.011795    7708 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-4tt76" in "kube-system" namespace (skipping!): pods "coredns-64897985d-4tt76" not found
	I0512 01:32:00.011795    7708 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-qb7hg" in "kube-system" namespace to be "Ready" ...
	I0512 01:32:02.310931    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:04.911817    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:07.328975    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:09.820460    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:11.823696    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:14.268382    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:16.765292    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:18.767304    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:21.260485    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:23.273725    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:25.769916    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:27.826590    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:30.263645    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:32.267590    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:34.764812    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:36.768143    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:39.268940    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:41.767295    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:44.272769    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:46.785603    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:49.265520    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:51.809083    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:54.309659    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:56.771407    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:32:58.774252    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:01.258827    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:03.261513    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:05.759102    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:07.816191    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:10.263264    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:12.274212    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:14.766823    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:17.269040    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:19.270288    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:21.817761    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:24.263231    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:26.271788    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:28.769993    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:31.268709    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:33.274900    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:35.770458    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:38.270464    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:40.768522    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:43.271172    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:45.272904    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:47.311259    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:49.773318    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:51.831775    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:54.270719    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:56.273561    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:33:58.777833    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:01.277745    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:03.770782    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:05.813119    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:08.275803    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:10.768146    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:12.769936    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:14.771855    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:17.325192    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:19.775165    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:22.260980    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:24.268089    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:26.766335    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:28.772923    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:30.774626    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:33.273674    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:35.278707    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:37.283721    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:39.778790    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:41.814165    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:43.815027    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:46.278912    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:48.782569    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:51.278299    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:53.279093    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:55.772545    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:34:57.826514    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:00.273796    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:02.279326    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:04.792435    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:07.270892    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:09.279201    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:11.818622    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:14.269318    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:16.287789    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:18.772547    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:21.279343    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:23.315632    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:25.772687    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:27.816452    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:30.352477    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:32.767995    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:34.769543    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:36.779527    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:38.779662    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:41.283297    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:43.776663    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:45.777196    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:47.777406    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:49.780429    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:52.288882    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:54.776296    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:56.786355    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:35:59.283297    7708 pod_ready.go:102] pod "coredns-64897985d-qb7hg" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:00.332819    7708 pod_ready.go:81] duration metric: took 4m0.3088485s waiting for pod "coredns-64897985d-qb7hg" in "kube-system" namespace to be "Ready" ...
	E0512 01:36:00.332819    7708 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0512 01:36:00.332819    7708 pod_ready.go:78] waiting up to 5m0s for pod "etcd-custom-weave-20220512010244-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:36:00.351810    7708 pod_ready.go:92] pod "etcd-custom-weave-20220512010244-7184" in "kube-system" namespace has status "Ready":"True"
	I0512 01:36:00.351810    7708 pod_ready.go:81] duration metric: took 18.9897ms waiting for pod "etcd-custom-weave-20220512010244-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:36:00.351810    7708 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-custom-weave-20220512010244-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:36:00.365931    7708 pod_ready.go:92] pod "kube-apiserver-custom-weave-20220512010244-7184" in "kube-system" namespace has status "Ready":"True"
	I0512 01:36:00.365931    7708 pod_ready.go:81] duration metric: took 14.1204ms waiting for pod "kube-apiserver-custom-weave-20220512010244-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:36:00.365931    7708 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-custom-weave-20220512010244-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:36:00.380833    7708 pod_ready.go:92] pod "kube-controller-manager-custom-weave-20220512010244-7184" in "kube-system" namespace has status "Ready":"True"
	I0512 01:36:00.380833    7708 pod_ready.go:81] duration metric: took 14.9008ms waiting for pod "kube-controller-manager-custom-weave-20220512010244-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:36:00.380833    7708 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-8f2lb" in "kube-system" namespace to be "Ready" ...
	I0512 01:36:00.674195    7708 pod_ready.go:92] pod "kube-proxy-8f2lb" in "kube-system" namespace has status "Ready":"True"
	I0512 01:36:00.674195    7708 pod_ready.go:81] duration metric: took 293.347ms waiting for pod "kube-proxy-8f2lb" in "kube-system" namespace to be "Ready" ...
	I0512 01:36:00.674195    7708 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-custom-weave-20220512010244-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:36:01.066685    7708 pod_ready.go:92] pod "kube-scheduler-custom-weave-20220512010244-7184" in "kube-system" namespace has status "Ready":"True"
	I0512 01:36:01.066685    7708 pod_ready.go:81] duration metric: took 392.47ms waiting for pod "kube-scheduler-custom-weave-20220512010244-7184" in "kube-system" namespace to be "Ready" ...
	I0512 01:36:01.066685    7708 pod_ready.go:78] waiting up to 5m0s for pod "weave-net-st7fh" in "kube-system" namespace to be "Ready" ...
	I0512 01:36:03.505254    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:05.999372    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:08.501630    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:11.027749    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:13.508171    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:16.003239    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:18.007578    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:20.502462    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:22.505742    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:25.003309    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:27.004365    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:29.008859    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:31.030278    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:33.497075    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:35.503432    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:37.506193    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:39.509858    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:42.002253    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:44.003920    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:46.038932    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:48.519084    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:51.003431    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:53.014278    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:55.500603    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:57.523886    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:36:59.996103    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:02.022751    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:04.502568    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:06.510242    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:08.999002    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:11.001292    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:13.009068    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:15.027538    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:17.512640    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:19.517706    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:21.522132    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:24.008154    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:26.500239    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:28.511496    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:31.011363    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:33.536608    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:36.008282    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:38.022788    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:40.508236    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:43.006963    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:45.011735    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:47.014831    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:49.046703    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:51.513135    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:54.012620    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:56.519349    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:37:59.001204    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:01.011215    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:03.022223    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:05.516085    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:08.007678    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:10.074999    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:12.511027    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:14.511589    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:16.525369    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:19.017833    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:21.508042    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:23.515787    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:26.002951    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:28.009153    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:30.017158    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:32.030270    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:34.038373    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:36.505507    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:38.526399    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:41.015046    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:43.023501    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:45.080487    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:47.517244    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:50.054829    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:52.520030    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:55.005667    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:57.007188    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:38:59.014295    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:39:01.015300    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:39:03.513284    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:39:06.014885    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:39:08.522372    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:39:11.006669    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:39:13.522877    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:39:21.123617    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:39:23.512855    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:39:26.046764    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:39:28.547483    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:39:31.018452    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:39:33.025819    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:39:35.513463    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:39:38.006747    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:39:44.880261    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:39:47.013610    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:39:49.015894    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:39:51.017594    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:39:53.018972    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:39:55.505938    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:39:57.517611    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:39:59.521153    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:40:01.546147    7708 pod_ready.go:102] pod "weave-net-st7fh" in "kube-system" namespace has status "Ready":"False"
	I0512 01:40:01.546147    7708 pod_ready.go:81] duration metric: took 4m0.4672497s waiting for pod "weave-net-st7fh" in "kube-system" namespace to be "Ready" ...
	E0512 01:40:01.546147    7708 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0512 01:40:01.546147    7708 pod_ready.go:38] duration metric: took 8m11.6059683s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0512 01:40:01.546147    7708 api_server.go:51] waiting for apiserver process to appear ...
	I0512 01:40:01.883873    7708 out.go:177] 
	W0512 01:40:02.085942    7708 out.go:239] X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared
	W0512 01:40:02.086119    7708 out.go:239] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W0512 01:40:02.086119    7708 out.go:239] * Related issues:
	* Related issues:
	W0512 01:40:02.086119    7708 out.go:239]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W0512 01:40:02.086119    7708 out.go:239]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I0512 01:40:02.176374    7708 out.go:177] 

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:103: failed start: exit status 105
--- FAIL: TestNetworkPlugins/group/custom-weave/Start (984.75s)
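
The 4m0s weave-net timeout above is minikube's pod-readiness poll expiring: pod_ready.go re-checks the pod's Ready condition on a short interval until the budget lapses, then the start flow gives up. A minimal sketch of that loop, assuming a configured client-go clientset; the helper name and the 2s interval are illustrative, not minikube's exact code:

	package podwait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls the named pod every 2s until its Ready condition
	// is True or the 4m budget expires, mirroring the log lines above.
	func waitPodReady(cs kubernetes.Interface, ns, name string) error {
		return wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					fmt.Printf("pod %q in %q namespace has status \"Ready\":%q\n", name, ns, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}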

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (333s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6193559s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6516478s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default
E0512 01:38:51.932148    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6634062s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6714223s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6872323s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6176205s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5797088s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default
E0512 01:40:39.602255    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220512010244-7184\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.6361423s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default
E0512 01:41:19.940482    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-20220512010315-7184\client.crt: The system cannot find the path specified.
E0512 01:41:25.067052    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5659893s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0512 01:41:38.254811    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220512010244-7184\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.5865596s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default
E0512 01:42:29.383095    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.4869638s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (0s)
net_test.go:175: failed to do nslookup on kubernetes.default: context deadline exceeded
net_test.go:180: failed nslookup: got="", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (333.00s)
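
The sequence above is net_test.go retrying one in-cluster DNS probe until its overall deadline lapses ("context deadline exceeded"). A rough sketch of that probe in Go, reusing the kubectl invocation and context name from the log; the 5-minute budget and 10s retry cadence are assumptions, not the test's exact backoff:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// probeDNS re-runs nslookup inside the netcat deployment, as
	// net_test.go:169 does above, until it succeeds or the deadline passes.
	func probeDNS(kubeContext string) error {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
		defer cancel()
		for {
			cmd := exec.CommandContext(ctx, "kubectl", "--context", kubeContext,
				"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default")
			if out, err := cmd.CombinedOutput(); err == nil {
				fmt.Printf("resolved:\n%s", out)
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("nslookup never succeeded: %w", ctx.Err())
			case <-time.After(10 * time.Second):
			}
		}
	}

	func main() {
		if err := probeDNS("enable-default-cni-20220512010229-7184"); err != nil {
			fmt.Println(err)
		}
	}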

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (157.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-20220512010229-7184 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker
E0512 01:39:53.014342    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubenet-20220512010229-7184 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker: exit status 1 (2m37.3314185s)
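
For reference, a small harness can drive the same start invocation and recover the separated stdout/stderr transcript plus exit code shown below. The binary path and flags are copied from the log; the helper itself is a sketch, not minikube's test code:

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	// startCluster launches the same command as net_test.go:101, keeping
	// stdout and stderr apart (matching the -- stdout -- / ** stderr **
	// sections of this report) and recovering the exit code on failure.
	func startCluster(profile string) (stdout, stderr string, code int, err error) {
		cmd := exec.Command("out/minikube-windows-amd64.exe", "start", "-p", profile,
			"--memory=2048", "--alsologtostderr", "--wait=true", "--wait-timeout=5m",
			"--network-plugin=kubenet", "--driver=docker")
		var outBuf, errBuf bytes.Buffer
		cmd.Stdout, cmd.Stderr = &outBuf, &errBuf
		err = cmd.Run()
		if ee, ok := err.(*exec.ExitError); ok {
			code = ee.ExitCode() // e.g. the "exit status 1" recorded above
		}
		return outBuf.String(), errBuf.String(), code, err
	}

	func main() {
		out, errs, code, _ := startCluster("kubenet-20220512010229-7184")
		fmt.Printf("exit=%d\n-- stdout --\n%s\n** stderr **\n%s\n", code, out, errs)
	}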

                                                
                                                
-- stdout --
	* [kubenet-20220512010229-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13639
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with the root privilege
	* Starting control plane node kubenet-20220512010229-7184 in cluster kubenet-20220512010229-7184
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass

                                                
                                                
-- /stdout --
** stderr ** 
	I0512 01:39:52.315789    1612 out.go:296] Setting OutFile to fd 1488 ...
	I0512 01:39:52.378233    1612 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:39:52.378233    1612 out.go:309] Setting ErrFile to fd 1568...
	I0512 01:39:52.378233    1612 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 01:39:52.389241    1612 out.go:303] Setting JSON to false
	I0512 01:39:52.391231    1612 start.go:115] hostinfo: {"hostname":"minikube4","uptime":18045,"bootTime":1652301547,"procs":167,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0512 01:39:52.392234    1612 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0512 01:39:52.406230    1612 out.go:177] * [kubenet-20220512010229-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0512 01:39:52.411650    1612 notify.go:193] Checking for updates...
	I0512 01:39:52.414265    1612 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0512 01:39:52.421231    1612 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0512 01:39:52.423223    1612 out.go:177]   - MINIKUBE_LOCATION=13639
	I0512 01:39:52.430225    1612 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0512 01:39:52.435232    1612 config.go:178] Loaded profile config "bridge-20220512010229-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:39:52.435232    1612 config.go:178] Loaded profile config "custom-weave-20220512010244-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:39:52.435232    1612 config.go:178] Loaded profile config "enable-default-cni-20220512010229-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:39:52.436223    1612 driver.go:358] Setting default libvirt URI to qemu:///system
	I0512 01:39:55.187777    1612 docker.go:137] docker version: linux-20.10.14
	I0512 01:39:55.195857    1612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:39:57.312050    1612 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1158935s)
	I0512 01:39:57.312666    1612 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:78 OomKillDisable:true NGoroutines:76 SystemTime:2022-05-12 01:39:56.2352953 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:39:57.316545    1612 out.go:177] * Using the docker driver based on user configuration
	I0512 01:39:57.318904    1612 start.go:284] selected driver: docker
	I0512 01:39:57.318904    1612 start.go:801] validating driver "docker" against <nil>
	I0512 01:39:57.318979    1612 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0512 01:39:57.405878    1612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:39:59.526274    1612 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1192365s)
	I0512 01:39:59.526274    1612 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:80 OomKillDisable:true NGoroutines:70 SystemTime:2022-05-12 01:39:58.4717221 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:39:59.526274    1612 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0512 01:39:59.527379    1612 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0512 01:39:59.530695    1612 out.go:177] * Using Docker Desktop driver with the root privilege
	I0512 01:39:59.532686    1612 cni.go:91] network plugin configured as "kubenet", returning disabled
	I0512 01:39:59.532686    1612 start_flags.go:306] config:
	{Name:kubenet-20220512010229-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kubenet-20220512010229-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 01:39:59.534697    1612 out.go:177] * Starting control plane node kubenet-20220512010229-7184 in cluster kubenet-20220512010229-7184
	I0512 01:39:59.538705    1612 cache.go:120] Beginning downloading kic base image for docker with docker
	I0512 01:39:59.540694    1612 out.go:177] * Pulling base image ...
	I0512 01:39:59.543687    1612 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:39:59.543687    1612 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0512 01:39:59.543687    1612 preload.go:148] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0512 01:39:59.543687    1612 cache.go:57] Caching tarball of preloaded images
	I0512 01:39:59.544697    1612 preload.go:174] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0512 01:39:59.544697    1612 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0512 01:39:59.544697    1612 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\config.json ...
	I0512 01:39:59.544697    1612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\config.json: {Name:mka5ed2bcf61c75ad516108a0e3c3637c605f194 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:40:00.651030    1612 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon, skipping pull
	I0512 01:40:00.651138    1612 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in daemon, skipping load
	I0512 01:40:00.651208    1612 cache.go:206] Successfully downloaded all kic artifacts
	I0512 01:40:00.651360    1612 start.go:352] acquiring machines lock for kubenet-20220512010229-7184: {Name:mk719d698c2586c1b82c8f5037117332f1397cc1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0512 01:40:00.651643    1612 start.go:356] acquired machines lock for "kubenet-20220512010229-7184" in 142.8µs
	I0512 01:40:00.651643    1612 start.go:91] Provisioning new machine with config: &{Name:kubenet-20220512010229-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kubenet-20220512010229-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 01:40:00.651643    1612 start.go:131] createHost starting for "" (driver="docker")
	I0512 01:40:00.839258    1612 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0512 01:40:00.840363    1612 start.go:165] libmachine.API.Create for "kubenet-20220512010229-7184" (driver="docker")
	I0512 01:40:00.840363    1612 client.go:168] LocalClient.Create starting
	I0512 01:40:00.841090    1612 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0512 01:40:00.841137    1612 main.go:134] libmachine: Decoding PEM data...
	I0512 01:40:00.841137    1612 main.go:134] libmachine: Parsing certificate...
	I0512 01:40:00.841137    1612 main.go:134] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0512 01:40:00.841892    1612 main.go:134] libmachine: Decoding PEM data...
	I0512 01:40:00.841892    1612 main.go:134] libmachine: Parsing certificate...
	I0512 01:40:00.853409    1612 cli_runner.go:164] Run: docker network inspect kubenet-20220512010229-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0512 01:40:01.955658    1612 cli_runner.go:211] docker network inspect kubenet-20220512010229-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0512 01:40:01.955658    1612 cli_runner.go:217] Completed: docker network inspect kubenet-20220512010229-7184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.1020956s)
	I0512 01:40:01.963988    1612 network_create.go:272] running [docker network inspect kubenet-20220512010229-7184] to gather additional debugging logs...
	I0512 01:40:01.963988    1612 cli_runner.go:164] Run: docker network inspect kubenet-20220512010229-7184
	W0512 01:40:03.123947    1612 cli_runner.go:211] docker network inspect kubenet-20220512010229-7184 returned with exit code 1
	I0512 01:40:03.123947    1612 cli_runner.go:217] Completed: docker network inspect kubenet-20220512010229-7184: (1.1599s)
	I0512 01:40:03.123947    1612 network_create.go:275] error running [docker network inspect kubenet-20220512010229-7184]: docker network inspect kubenet-20220512010229-7184: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubenet-20220512010229-7184
	I0512 01:40:03.123947    1612 network_create.go:277] output of [docker network inspect kubenet-20220512010229-7184]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubenet-20220512010229-7184
	
	** /stderr **
	I0512 01:40:03.130947    1612 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0512 01:40:04.226339    1612 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0953365s)
	I0512 01:40:04.247317    1612 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0000062f0] misses:0}
	I0512 01:40:04.247317    1612 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:40:04.247317    1612 network_create.go:115] attempt to create docker network kubenet-20220512010229-7184 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0512 01:40:04.255348    1612 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220512010229-7184
	W0512 01:40:05.344301    1612 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220512010229-7184 returned with exit code 1
	I0512 01:40:05.344439    1612 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220512010229-7184: (1.0888977s)
	W0512 01:40:05.344491    1612 network_create.go:107] failed to create docker network kubenet-20220512010229-7184 192.168.49.0/24, will retry: subnet is taken
	I0512 01:40:05.366980    1612 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000062f0] amended:false}} dirty:map[] misses:0}
	I0512 01:40:05.366980    1612 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:40:05.389485    1612 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000062f0] amended:true}} dirty:map[192.168.49.0:0xc0000062f0 192.168.58.0:0xc000788768] misses:0}
	I0512 01:40:05.389485    1612 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:40:05.389485    1612 network_create.go:115] attempt to create docker network kubenet-20220512010229-7184 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0512 01:40:05.399064    1612 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220512010229-7184
	W0512 01:40:06.484293    1612 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220512010229-7184 returned with exit code 1
	I0512 01:40:06.484293    1612 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220512010229-7184: (1.0851741s)
	W0512 01:40:06.484293    1612 network_create.go:107] failed to create docker network kubenet-20220512010229-7184 192.168.58.0/24, will retry: subnet is taken
	I0512 01:40:06.502305    1612 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000062f0] amended:true}} dirty:map[192.168.49.0:0xc0000062f0 192.168.58.0:0xc000788768] misses:1}
	I0512 01:40:06.502305    1612 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:40:06.523364    1612 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0000062f0] amended:true}} dirty:map[192.168.49.0:0xc0000062f0 192.168.58.0:0xc000788768 192.168.67.0:0xc0005c09c0] misses:1}
	I0512 01:40:06.523715    1612 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0512 01:40:06.523715    1612 network_create.go:115] attempt to create docker network kubenet-20220512010229-7184 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0512 01:40:06.532043    1612 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220512010229-7184
	I0512 01:40:09.202388    1612 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20220512010229-7184: (2.6702094s)
	I0512 01:40:09.202388    1612 network_create.go:99] docker network kubenet-20220512010229-7184 192.168.67.0/24 created
	I0512 01:40:09.202388    1612 kic.go:106] calculated static IP "192.168.67.2" for the "kubenet-20220512010229-7184" container
	I0512 01:40:09.216386    1612 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0512 01:40:10.263241    1612 cli_runner.go:217] Completed: docker ps -a --format {{.Names}}: (1.0468016s)
	I0512 01:40:10.275283    1612 cli_runner.go:164] Run: docker volume create kubenet-20220512010229-7184 --label name.minikube.sigs.k8s.io=kubenet-20220512010229-7184 --label created_by.minikube.sigs.k8s.io=true
	I0512 01:40:12.136450    1612 cli_runner.go:217] Completed: docker volume create kubenet-20220512010229-7184 --label name.minikube.sigs.k8s.io=kubenet-20220512010229-7184 --label created_by.minikube.sigs.k8s.io=true: (1.860976s)
	I0512 01:40:12.136450    1612 oci.go:103] Successfully created a docker volume kubenet-20220512010229-7184
	I0512 01:40:12.144718    1612 cli_runner.go:164] Run: docker run --rm --name kubenet-20220512010229-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20220512010229-7184 --entrypoint /usr/bin/test -v kubenet-20220512010229-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib
	I0512 01:40:16.011626    1612 cli_runner.go:217] Completed: docker run --rm --name kubenet-20220512010229-7184-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20220512010229-7184 --entrypoint /usr/bin/test -v kubenet-20220512010229-7184:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -d /var/lib: (3.8664262s)
	I0512 01:40:16.011700    1612 oci.go:107] Successfully prepared a docker volume kubenet-20220512010229-7184
	I0512 01:40:16.011774    1612 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:40:16.011774    1612 kic.go:179] Starting extracting preloaded images to volume ...
	I0512 01:40:16.023137    1612 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20220512010229-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir
	I0512 01:40:38.546936    1612 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20220512010229-7184:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a -I lz4 -xf /preloaded.tar -C /extractDir: (22.5226549s)
	I0512 01:40:38.546936    1612 kic.go:188] duration metric: took 22.534017 seconds to extract preloaded images to volume
	I0512 01:40:38.555586    1612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0512 01:40:40.668551    1612 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1128575s)
	I0512 01:40:40.668551    1612 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:78 OomKillDisable:true NGoroutines:58 SystemTime:2022-05-12 01:40:39.596948 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0512 01:40:40.676558    1612 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0512 01:40:42.816332    1612 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.1396649s)
	I0512 01:40:42.826487    1612 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-20220512010229-7184 --name kubenet-20220512010229-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20220512010229-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-20220512010229-7184 --network kubenet-20220512010229-7184 --ip 192.168.67.2 --volume kubenet-20220512010229-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a
	I0512 01:40:49.233287    1612 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-20220512010229-7184 --name kubenet-20220512010229-7184 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20220512010229-7184 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-20220512010229-7184 --network kubenet-20220512010229-7184 --ip 192.168.67.2 --volume kubenet-20220512010229-7184:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a: (6.4063938s)
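The docker run above creates the node container with each control port published to an ephemeral localhost port (--publish=127.0.0.1::8443 and friends, with no fixed host port). To resolve one of those dynamic mappings by hand, docker port works; the value shown is only an example, since the real port is assigned at container start:

	docker port kubenet-20220512010229-7184 8443/tcp
	# e.g. 127.0.0.1:51718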
	I0512 01:40:49.246518    1612 cli_runner.go:164] Run: docker container inspect kubenet-20220512010229-7184 --format={{.State.Running}}
	I0512 01:40:50.428089    1612 cli_runner.go:217] Completed: docker container inspect kubenet-20220512010229-7184 --format={{.State.Running}}: (1.1815108s)
	I0512 01:40:50.435086    1612 cli_runner.go:164] Run: docker container inspect kubenet-20220512010229-7184 --format={{.State.Status}}
	I0512 01:40:51.540941    1612 cli_runner.go:217] Completed: docker container inspect kubenet-20220512010229-7184 --format={{.State.Status}}: (1.1057983s)
	I0512 01:40:51.549061    1612 cli_runner.go:164] Run: docker exec kubenet-20220512010229-7184 stat /var/lib/dpkg/alternatives/iptables
	I0512 01:40:52.811592    1612 cli_runner.go:217] Completed: docker exec kubenet-20220512010229-7184 stat /var/lib/dpkg/alternatives/iptables: (1.2624667s)
	I0512 01:40:52.811592    1612 oci.go:247] the created container "kubenet-20220512010229-7184" has a running status.
	I0512 01:40:52.811592    1612 kic.go:210] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-20220512010229-7184\id_rsa...
	I0512 01:40:53.341876    1612 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-20220512010229-7184\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0512 01:40:54.600480    1612 cli_runner.go:164] Run: docker container inspect kubenet-20220512010229-7184 --format={{.State.Status}}
	I0512 01:40:55.774893    1612 cli_runner.go:217] Completed: docker container inspect kubenet-20220512010229-7184 --format={{.State.Status}}: (1.1742277s)
	I0512 01:40:55.790931    1612 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0512 01:40:55.790931    1612 kic_runner.go:114] Args: [docker exec --privileged kubenet-20220512010229-7184 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0512 01:40:57.093774    1612 kic_runner.go:123] Done: [docker exec --privileged kubenet-20220512010229-7184 chown docker:docker /home/docker/.ssh/authorized_keys]: (1.3027768s)
	I0512 01:40:57.096784    1612 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-20220512010229-7184\id_rsa...
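With the public key installed as /home/docker/.ssh/authorized_keys and chown'd to docker:docker, the node is reachable over the 22/tcp mapping that later log lines resolve to port 51717. A sketch of a manual connection using the same key and port:

	ssh -i C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-20220512010229-7184\id_rsa \
	    -p 51717 docker@127.0.0.1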
	I0512 01:40:57.601301    1612 cli_runner.go:164] Run: docker container inspect kubenet-20220512010229-7184 --format={{.State.Status}}
	I0512 01:40:58.693979    1612 cli_runner.go:217] Completed: docker container inspect kubenet-20220512010229-7184 --format={{.State.Status}}: (1.0926222s)
	I0512 01:40:58.693979    1612 machine.go:88] provisioning docker machine ...
	I0512 01:40:58.693979    1612 ubuntu.go:169] provisioning hostname "kubenet-20220512010229-7184"
	I0512 01:40:58.702979    1612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220512010229-7184
	I0512 01:40:59.771091    1612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220512010229-7184: (1.0670579s)
	I0512 01:40:59.774638    1612 main.go:134] libmachine: Using SSH client type: native
	I0512 01:40:59.782009    1612 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51717 <nil> <nil>}
	I0512 01:40:59.782009    1612 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubenet-20220512010229-7184 && echo "kubenet-20220512010229-7184" | sudo tee /etc/hostname
	I0512 01:40:59.923694    1612 main.go:134] libmachine: SSH cmd err, output: <nil>: kubenet-20220512010229-7184
	
	I0512 01:40:59.933687    1612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220512010229-7184
	I0512 01:41:01.031717    1612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220512010229-7184: (1.0978085s)
	I0512 01:41:01.036802    1612 main.go:134] libmachine: Using SSH client type: native
	I0512 01:41:01.036879    1612 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51717 <nil> <nil>}
	I0512 01:41:01.036879    1612 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-20220512010229-7184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-20220512010229-7184/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-20220512010229-7184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0512 01:41:01.234753    1612 main.go:134] libmachine: SSH cmd err, output: <nil>: 
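The script above is an idempotent hosts update: it rewrites the 127.0.1.1 entry if one exists and appends it otherwise, so re-provisioning never duplicates the line. The same guard pattern condensed to one line (sketch, same hostname assumed):

	grep -q 'kubenet-20220512010229-7184' /etc/hosts || \
	  echo '127.0.1.1 kubenet-20220512010229-7184' | sudo tee -a /etc/hosts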
	I0512 01:41:01.234753    1612 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0512 01:41:01.234753    1612 ubuntu.go:177] setting up certificates
	I0512 01:41:01.234753    1612 provision.go:83] configureAuth start
	I0512 01:41:01.241754    1612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20220512010229-7184
	I0512 01:41:02.306651    1612 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20220512010229-7184: (1.0648431s)
	I0512 01:41:02.306651    1612 provision.go:138] copyHostCerts
	I0512 01:41:02.306651    1612 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0512 01:41:02.306651    1612 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0512 01:41:02.306651    1612 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0512 01:41:02.307616    1612 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0512 01:41:02.307616    1612 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0512 01:41:02.309193    1612 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0512 01:41:02.310052    1612 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0512 01:41:02.310052    1612 exec_runner.go:207] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0512 01:41:02.310052    1612 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I0512 01:41:02.311427    1612 provision.go:112] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubenet-20220512010229-7184 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubenet-20220512010229-7184]
	I0512 01:41:02.449806    1612 provision.go:172] copyRemoteCerts
	I0512 01:41:02.460899    1612 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0512 01:41:02.467360    1612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220512010229-7184
	I0512 01:41:03.492725    1612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220512010229-7184: (1.0251869s)
	I0512 01:41:03.492725    1612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51717 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-20220512010229-7184\id_rsa Username:docker}
	I0512 01:41:03.642081    1612 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.1811224s)
	I0512 01:41:03.643840    1612 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0512 01:41:03.700238    1612 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0512 01:41:03.757801    1612 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0512 01:41:03.813737    1612 provision.go:86] duration metric: configureAuth took 2.5788524s
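configureAuth generated a server certificate whose SANs (the san=[...] list at provision.go:112 above) cover the container IP, localhost and the minikube hostnames, then copied it to /etc/docker. One way to confirm the SANs from inside the node (sketch):

	openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'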
	I0512 01:41:03.813737    1612 ubuntu.go:193] setting minikube options for container-runtime
	I0512 01:41:03.813737    1612 config.go:178] Loaded profile config "kubenet-20220512010229-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:41:03.824778    1612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220512010229-7184
	I0512 01:41:04.908744    1612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220512010229-7184: (1.0838158s)
	I0512 01:41:04.914455    1612 main.go:134] libmachine: Using SSH client type: native
	I0512 01:41:04.915079    1612 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51717 <nil> <nil>}
	I0512 01:41:04.915079    1612 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0512 01:41:05.109801    1612 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0512 01:41:05.109801    1612 ubuntu.go:71] root file system type: overlay
	I0512 01:41:05.110530    1612 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0512 01:41:05.118691    1612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220512010229-7184
	I0512 01:41:06.197099    1612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220512010229-7184: (1.078277s)
	I0512 01:41:06.202051    1612 main.go:134] libmachine: Using SSH client type: native
	I0512 01:41:06.202875    1612 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51717 <nil> <nil>}
	I0512 01:41:06.203078    1612 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0512 01:41:06.415170    1612 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0512 01:41:06.424158    1612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220512010229-7184
	I0512 01:41:07.486961    1612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220512010229-7184: (1.0627487s)
	I0512 01:41:07.491839    1612 main.go:134] libmachine: Using SSH client type: native
	I0512 01:41:07.492187    1612 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x422e20] 0x425c80 <nil>  [] 0s} 127.0.0.1 51717 <nil> <nil>}
	I0512 01:41:07.492187    1612 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0512 01:41:08.819481    1612 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-05 13:17:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-05-12 01:41:06.395778000 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0512 01:41:08.819536    1612 machine.go:91] provisioned docker machine in 10.1250427s
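The diff above shows the stock docker.service being replaced wholesale: ExecStart is cleared and re-set with the TLS and ulimit flags, and the restart policy is tightened. After the daemon-reload and restart, the effective daemon command line can be checked with systemctl (sketch, inside the node):

	systemctl show docker --property=ExecStart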
	I0512 01:41:08.819580    1612 client.go:171] LocalClient.Create took 1m7.9757637s
	I0512 01:41:08.819580    1612 start.go:173] duration metric: libmachine.API.Create for "kubenet-20220512010229-7184" took 1m7.9757637s
	I0512 01:41:08.819649    1612 start.go:306] post-start starting for "kubenet-20220512010229-7184" (driver="docker")
	I0512 01:41:08.819676    1612 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0512 01:41:08.836678    1612 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0512 01:41:08.843824    1612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220512010229-7184
	I0512 01:41:09.995062    1612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220512010229-7184: (1.1511798s)
	I0512 01:41:09.995062    1612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51717 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-20220512010229-7184\id_rsa Username:docker}
	I0512 01:41:10.170741    1612 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.3337385s)
	I0512 01:41:10.192735    1612 ssh_runner.go:195] Run: cat /etc/os-release
	I0512 01:41:10.207074    1612 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0512 01:41:10.207146    1612 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0512 01:41:10.207172    1612 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0512 01:41:10.207172    1612 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0512 01:41:10.207172    1612 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0512 01:41:10.207172    1612 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0512 01:41:10.207866    1612 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem -> 71842.pem in /etc/ssl/certs
	I0512 01:41:10.220723    1612 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0512 01:41:10.243113    1612 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /etc/ssl/certs/71842.pem (1708 bytes)
	I0512 01:41:10.298950    1612 start.go:309] post-start completed in 1.4791433s
	I0512 01:41:10.309368    1612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20220512010229-7184
	I0512 01:41:11.498384    1612 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20220512010229-7184: (1.1887258s)
	I0512 01:41:11.498384    1612 profile.go:148] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\config.json ...
	I0512 01:41:11.512261    1612 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 01:41:11.521961    1612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220512010229-7184
	I0512 01:41:12.660553    1612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220512010229-7184: (1.1385342s)
	I0512 01:41:12.660553    1612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51717 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-20220512010229-7184\id_rsa Username:docker}
	I0512 01:41:12.778173    1612 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.2656658s)
	I0512 01:41:12.791137    1612 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0512 01:41:12.810712    1612 start.go:134] duration metric: createHost completed in 1m12.1553231s
	I0512 01:41:12.810712    1612 start.go:81] releasing machines lock for "kubenet-20220512010229-7184", held for 1m12.1554029s
	I0512 01:41:12.818914    1612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20220512010229-7184
	I0512 01:41:13.960374    1612 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20220512010229-7184: (1.1410385s)
	I0512 01:41:13.965141    1612 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0512 01:41:13.972461    1612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220512010229-7184
	I0512 01:41:13.974183    1612 ssh_runner.go:195] Run: systemctl --version
	I0512 01:41:13.980478    1612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220512010229-7184
	I0512 01:41:15.069945    1612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220512010229-7184: (1.0894112s)
	I0512 01:41:15.069945    1612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51717 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-20220512010229-7184\id_rsa Username:docker}
	I0512 01:41:15.097938    1612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220512010229-7184: (1.1254194s)
	I0512 01:41:15.098938    1612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51717 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-20220512010229-7184\id_rsa Username:docker}
	I0512 01:41:15.185874    1612 ssh_runner.go:235] Completed: systemctl --version: (1.2116296s)
	I0512 01:41:15.195904    1612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0512 01:41:15.277883    1612 ssh_runner.go:235] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.3126747s)
	I0512 01:41:15.290874    1612 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 01:41:15.317118    1612 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0512 01:41:15.327113    1612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0512 01:41:15.357235    1612 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0512 01:41:15.409261    1612 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0512 01:41:15.575401    1612 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0512 01:41:15.751690    1612 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0512 01:41:15.801652    1612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0512 01:41:15.964800    1612 ssh_runner.go:195] Run: sudo systemctl start docker
	I0512 01:41:16.001277    1612 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0512 01:41:16.098111    1612 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
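By this point /etc/crictl.yaml has been pointed at the dockershim socket and the docker unit has been unmasked, enabled and started. A quick way to confirm the CRI endpoint answers (sketch, inside the node):

	sudo crictl --runtime-endpoint unix:///var/run/dockershim.sock info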
	I0512 01:41:16.191351    1612 out.go:204] * Preparing Kubernetes v1.23.5 on Docker 20.10.15 ...
	I0512 01:41:16.204488    1612 cli_runner.go:164] Run: docker exec -t kubenet-20220512010229-7184 dig +short host.docker.internal
	I0512 01:41:17.527522    1612 cli_runner.go:217] Completed: docker exec -t kubenet-20220512010229-7184 dig +short host.docker.internal: (1.3229665s)
	I0512 01:41:17.527522    1612 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0512 01:41:17.539522    1612 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0512 01:41:17.561523    1612 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
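Same idempotent hosts pattern as during provisioning, now mapping host.minikube.internal to the host IP that the dig against host.docker.internal returned. To verify the entry landed (sketch, inside the node):

	grep 'host.minikube.internal' /etc/hosts
	# expected: 192.168.65.2	host.minikube.internal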
	I0512 01:41:17.612101    1612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-20220512010229-7184
	I0512 01:41:18.762730    1612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-20220512010229-7184: (1.1505705s)
	I0512 01:41:18.762730    1612 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0512 01:41:18.770731    1612 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 01:41:18.844809    1612 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 01:41:18.844809    1612 docker.go:541] Images already preloaded, skipping extraction
	I0512 01:41:18.852968    1612 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0512 01:41:18.926817    1612 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0512 01:41:18.926817    1612 cache_images.go:84] Images are preloaded, skipping loading
	I0512 01:41:18.938861    1612 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0512 01:41:19.124561    1612 cni.go:91] network plugin configured as "kubenet", returning disabled
	I0512 01:41:19.124621    1612 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0512 01:41:19.124660    1612 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-20220512010229-7184 NodeName:kubenet-20220512010229-7184 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0512 01:41:19.124779    1612 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubenet-20220512010229-7184"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
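	The generated kubeadm.yaml above stacks four documents: InitConfiguration (node registration and advertise address), ClusterConfiguration (API server SANs, admission plugins, subnets), KubeletConfiguration (cgroupfs driver, eviction disabled) and KubeProxyConfiguration (cluster CIDR, conntrack overrides). Before the real init further below, the file could be exercised with a dry run (sketch, same PATH trick as the logged init command):

	sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run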
	
	I0512 01:41:19.124779    1612 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubenet-20220512010229-7184 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=kubenet --node-ip=192.168.67.2 --pod-cidr=10.244.0.0/16
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:kubenet-20220512010229-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
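The kubelet unit above lands as the drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (scp'd a few lines below), using the same empty-ExecStart override trick as the docker unit. The merged unit plus drop-ins can be inspected with (sketch):

	systemctl cat kubelet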
	I0512 01:41:19.139439    1612 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0512 01:41:19.166708    1612 binaries.go:44] Found k8s binaries, skipping transfer
	I0512 01:41:19.177705    1612 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0512 01:41:19.209695    1612 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (403 bytes)
	I0512 01:41:19.257344    1612 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0512 01:41:19.294439    1612 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2049 bytes)
	I0512 01:41:19.347766    1612 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0512 01:41:19.361760    1612 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0512 01:41:19.390371    1612 certs.go:54] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184 for IP: 192.168.67.2
	I0512 01:41:19.390371    1612 certs.go:182] skipping minikubeCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I0512 01:41:19.391373    1612 certs.go:182] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I0512 01:41:19.391373    1612 certs.go:302] generating minikube-user signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\client.key
	I0512 01:41:19.391373    1612 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\client.crt with IP's: []
	I0512 01:41:19.779610    1612 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\client.crt ...
	I0512 01:41:19.779610    1612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\client.crt: {Name:mkb78a0601249b1cbbb96af84914fcb8d4c3c0eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:41:19.781226    1612 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\client.key ...
	I0512 01:41:19.781226    1612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\client.key: {Name:mk9ce343d263fad44c4b961d23e978648c8cdee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:41:19.781533    1612 certs.go:302] generating minikube signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\apiserver.key.c7fa3a9e
	I0512 01:41:19.782537    1612 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0512 01:41:19.921303    1612 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\apiserver.crt.c7fa3a9e ...
	I0512 01:41:19.921303    1612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\apiserver.crt.c7fa3a9e: {Name:mkdbecd0b7bf8ca89b0087e6f53301bc261dbefb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:41:19.921882    1612 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\apiserver.key.c7fa3a9e ...
	I0512 01:41:19.921882    1612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\apiserver.key.c7fa3a9e: {Name:mka23fd60e6f49108b353adde09c8eab9519ed65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:41:19.922935    1612 certs.go:320] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\apiserver.crt.c7fa3a9e -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\apiserver.crt
	I0512 01:41:19.929941    1612 certs.go:324] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\apiserver.key.c7fa3a9e -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\apiserver.key
	I0512 01:41:19.930947    1612 certs.go:302] generating aggregator signed cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\proxy-client.key
	I0512 01:41:19.930947    1612 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\proxy-client.crt with IP's: []
	I0512 01:41:20.202856    1612 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\proxy-client.crt ...
	I0512 01:41:20.202856    1612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\proxy-client.crt: {Name:mke2cc52971904eafbf8836ad1f0ccac006a600a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:41:20.203660    1612 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\proxy-client.key ...
	I0512 01:41:20.203660    1612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\proxy-client.key: {Name:mkba3e806ebf35237d1771c1c776b95910a2f5a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:41:20.211228    1612 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184.pem (1338 bytes)
	W0512 01:41:20.212102    1612 certs.go:384] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184_empty.pem, impossibly tiny 0 bytes
	I0512 01:41:20.212102    1612 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0512 01:41:20.212367    1612 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0512 01:41:20.212575    1612 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0512 01:41:20.212797    1612 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0512 01:41:20.213018    1612 certs.go:388] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem (1708 bytes)
	I0512 01:41:20.213442    1612 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0512 01:41:20.282232    1612 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0512 01:41:20.330751    1612 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0512 01:41:20.383206    1612 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubenet-20220512010229-7184\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0512 01:41:20.441495    1612 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0512 01:41:20.495596    1612 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0512 01:41:20.546950    1612 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0512 01:41:20.607987    1612 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0512 01:41:20.667614    1612 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\71842.pem --> /usr/share/ca-certificates/71842.pem (1708 bytes)
	I0512 01:41:20.716599    1612 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0512 01:41:20.768096    1612 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\7184.pem --> /usr/share/ca-certificates/7184.pem (1338 bytes)
	I0512 01:41:20.815630    1612 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0512 01:41:20.864649    1612 ssh_runner.go:195] Run: openssl version
	I0512 01:41:20.887664    1612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/71842.pem && ln -fs /usr/share/ca-certificates/71842.pem /etc/ssl/certs/71842.pem"
	I0512 01:41:20.923720    1612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71842.pem
	I0512 01:41:20.937969    1612 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 May 11 23:10 /usr/share/ca-certificates/71842.pem
	I0512 01:41:20.947969    1612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71842.pem
	I0512 01:41:20.974240    1612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/71842.pem /etc/ssl/certs/3ec20f2e.0"
	I0512 01:41:21.014886    1612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0512 01:41:21.055695    1612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:41:21.071641    1612 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 May 11 23:00 /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:41:21.081607    1612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0512 01:41:21.109872    1612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0512 01:41:21.146542    1612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7184.pem && ln -fs /usr/share/ca-certificates/7184.pem /etc/ssl/certs/7184.pem"
	I0512 01:41:21.181477    1612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7184.pem
	I0512 01:41:21.191498    1612 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 May 11 23:10 /usr/share/ca-certificates/7184.pem
	I0512 01:41:21.200491    1612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7184.pem
	I0512 01:41:21.221488    1612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7184.pem /etc/ssl/certs/51391683.0"
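The three ln -fs commands above install each CA under its OpenSSL subject-hash name (3ec20f2e.0, b5213941.0, 51391683.0), which is how OpenSSL locates trust anchors in /etc/ssl/certs. The hash in each link name is exactly what the preceding openssl run prints:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink above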
	I0512 01:41:21.245180    1612 kubeadm.go:391] StartCluster: {Name:kubenet-20220512010229-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kubenet-20220512010229-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0512 01:41:21.255453    1612 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0512 01:41:21.353129    1612 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0512 01:41:21.390749    1612 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0512 01:41:21.412971    1612 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0512 01:41:21.422766    1612 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0512 01:41:21.444293    1612 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0512 01:41:21.444602    1612 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0512 01:41:42.774702    1612 out.go:204]   - Generating certificates and keys ...
	I0512 01:41:42.780691    1612 out.go:204]   - Booting up control plane ...
	I0512 01:41:42.789692    1612 out.go:204]   - Configuring RBAC rules ...
	I0512 01:41:42.793687    1612 cni.go:91] network plugin configured as "kubenet", returning disabled
	I0512 01:41:42.793687    1612 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0512 01:41:42.805692    1612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:41:42.805692    1612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=50a7977b568d2ad3e04003527a57f4502d6177a0 minikube.k8s.io/name=kubenet-20220512010229-7184 minikube.k8s.io/updated_at=2022_05_12T01_41_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:41:42.850707    1612 ops.go:34] apiserver oom_adj: -16
	I0512 01:41:44.777642    1612 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=50a7977b568d2ad3e04003527a57f4502d6177a0 minikube.k8s.io/name=kubenet-20220512010229-7184 minikube.k8s.io/updated_at=2022_05_12T01_41_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (1.9718499s)
	I0512 01:41:44.777642    1612 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (1.9718499s)
	I0512 01:41:44.787060    1612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:41:45.478569    1612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:41:45.973073    1612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:41:46.474973    1612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:41:46.971575    1612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:41:47.478239    1612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:41:47.977647    1612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:41:48.469684    1612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:41:48.977953    1612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:41:49.479978    1612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:41:49.972440    1612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:41:50.478989    1612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:41:50.968266    1612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:41:51.474581    1612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:41:51.975558    1612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:41:52.482863    1612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:41:53.488362    1612 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0512 01:41:54.246795    1612 kubeadm.go:1020] duration metric: took 11.4525286s to wait for elevateKubeSystemPrivileges.
	I0512 01:41:54.247664    1612 kubeadm.go:393] StartCluster complete in 33.000341s
	I0512 01:41:54.247664    1612 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:41:54.248135    1612 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0512 01:41:54.251824    1612 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0512 01:41:55.349936    1612 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubenet-20220512010229-7184" rescaled to 1
	I0512 01:41:55.349936    1612 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0512 01:41:55.356009    1612 out.go:177] * Verifying Kubernetes components...
	I0512 01:41:55.349936    1612 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0512 01:41:55.349936    1612 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0512 01:41:55.350985    1612 config.go:178] Loaded profile config "kubenet-20220512010229-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 01:41:55.359962    1612 addons.go:65] Setting default-storageclass=true in profile "kubenet-20220512010229-7184"
	I0512 01:41:55.359962    1612 addons.go:65] Setting storage-provisioner=true in profile "kubenet-20220512010229-7184"
	I0512 01:41:55.359962    1612 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubenet-20220512010229-7184"
	I0512 01:41:55.359962    1612 addons.go:153] Setting addon storage-provisioner=true in "kubenet-20220512010229-7184"
	W0512 01:41:55.359962    1612 addons.go:165] addon storage-provisioner should already be in state true
	I0512 01:41:55.359962    1612 host.go:66] Checking if "kubenet-20220512010229-7184" exists ...
	I0512 01:41:55.379932    1612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 01:41:55.386942    1612 cli_runner.go:164] Run: docker container inspect kubenet-20220512010229-7184 --format={{.State.Status}}
	I0512 01:41:55.386942    1612 cli_runner.go:164] Run: docker container inspect kubenet-20220512010229-7184 --format={{.State.Status}}
	I0512 01:41:55.759646    1612 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0512 01:41:55.774710    1612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-20220512010229-7184
	I0512 01:41:57.007636    1612 cli_runner.go:217] Completed: docker container inspect kubenet-20220512010229-7184 --format={{.State.Status}}: (1.6206123s)
	I0512 01:41:57.011613    1612 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0512 01:41:57.013615    1612 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 01:41:57.013615    1612 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0512 01:41:57.022648    1612 cli_runner.go:217] Completed: docker container inspect kubenet-20220512010229-7184 --format={{.State.Status}}: (1.635624s)
	I0512 01:41:57.023628    1612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220512010229-7184
	I0512 01:41:57.044633    1612 addons.go:153] Setting addon default-storageclass=true in "kubenet-20220512010229-7184"
	W0512 01:41:57.044633    1612 addons.go:165] addon default-storageclass should already be in state true
	I0512 01:41:57.044633    1612 host.go:66] Checking if "kubenet-20220512010229-7184" exists ...
	I0512 01:41:57.080626    1612 cli_runner.go:164] Run: docker container inspect kubenet-20220512010229-7184 --format={{.State.Status}}
	I0512 01:41:57.530673    1612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubenet-20220512010229-7184: (1.7558741s)
	I0512 01:41:57.537688    1612 node_ready.go:35] waiting up to 5m0s for node "kubenet-20220512010229-7184" to be "Ready" ...
	I0512 01:41:57.555692    1612 node_ready.go:49] node "kubenet-20220512010229-7184" has status "Ready":"True"
	I0512 01:41:57.555692    1612 node_ready.go:38] duration metric: took 18.003ms waiting for node "kubenet-20220512010229-7184" to be "Ready" ...
	I0512 01:41:57.555692    1612 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0512 01:41:57.588683    1612 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-dqppb" in "kube-system" namespace to be "Ready" ...
	I0512 01:41:58.623464    1612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220512010229-7184: (1.5997553s)
	I0512 01:41:58.623464    1612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51717 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-20220512010229-7184\id_rsa Username:docker}
	I0512 01:41:58.654512    1612 cli_runner.go:217] Completed: docker container inspect kubenet-20220512010229-7184 --format={{.State.Status}}: (1.5738055s)
	I0512 01:41:58.654512    1612 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0512 01:41:58.654512    1612 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0512 01:41:58.673475    1612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220512010229-7184
	I0512 01:41:59.081766    1612 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0512 01:41:59.750981    1612 pod_ready.go:102] pod "coredns-64897985d-dqppb" in "kube-system" namespace has status "Ready":"False"
	I0512 01:42:00.074465    1612 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20220512010229-7184: (1.4009183s)
	I0512 01:42:00.074465    1612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51717 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubenet-20220512010229-7184\id_rsa Username:docker}
	I0512 01:42:00.862672    1612 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0512 01:42:01.656567    1612 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.8965498s)
	I0512 01:42:01.656567    1612 start.go:815] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0512 01:42:01.769455    1612 pod_ready.go:102] pod "coredns-64897985d-dqppb" in "kube-system" namespace has status "Ready":"False"
	I0512 01:42:02.250982    1612 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.1690558s)
	I0512 01:42:02.250982    1612 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.3882393s)
	I0512 01:42:02.255750    1612 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0512 01:42:02.258362    1612 addons.go:417] enableAddons completed in 6.9080752s
	I0512 01:42:03.185378    1612 pod_ready.go:92] pod "coredns-64897985d-dqppb" in "kube-system" namespace has status "Ready":"True"
	I0512 01:42:03.185378    1612 pod_ready.go:81] duration metric: took 5.5964115s waiting for pod "coredns-64897985d-dqppb" in "kube-system" namespace to be "Ready" ...
	I0512 01:42:03.185378    1612 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-zfdtc" in "kube-system" namespace to be "Ready" ...
	I0512 01:42:05.279066    1612 pod_ready.go:102] pod "coredns-64897985d-zfdtc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:42:07.283126    1612 pod_ready.go:102] pod "coredns-64897985d-zfdtc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:42:09.288328    1612 pod_ready.go:102] pod "coredns-64897985d-zfdtc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:42:11.784376    1612 pod_ready.go:102] pod "coredns-64897985d-zfdtc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:42:13.788324    1612 pod_ready.go:102] pod "coredns-64897985d-zfdtc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:42:15.789855    1612 pod_ready.go:102] pod "coredns-64897985d-zfdtc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:42:17.799825    1612 pod_ready.go:102] pod "coredns-64897985d-zfdtc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:42:20.291635    1612 pod_ready.go:102] pod "coredns-64897985d-zfdtc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:42:22.776427    1612 pod_ready.go:102] pod "coredns-64897985d-zfdtc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:42:24.784759    1612 pod_ready.go:102] pod "coredns-64897985d-zfdtc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:42:26.789065    1612 pod_ready.go:102] pod "coredns-64897985d-zfdtc" in "kube-system" namespace has status "Ready":"False"
	I0512 01:42:29.279419    1612 pod_ready.go:102] pod "coredns-64897985d-zfdtc" in "kube-system" namespace has status "Ready":"False"

                                                
                                                
** /stderr **
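
For reference, the cli_runner lines in the stderr above resolve the profile container's host-mapped ports with a Docker Go-template query. That lookup can be reproduced by hand; a minimal sketch using the container name from this run (substitute your own profile's container):

	# Print the host port Docker mapped to the container's 8443/tcp,
	# i.e. the Kubernetes API server endpoint minikube talks to.
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' \
	  kubenet-20220512010229-7184

The same template with "22/tcp" yields the SSH port that the sshutil.go lines dial on 127.0.0.1.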
net_test.go:103: failed start: exit status 1
--- FAIL: TestNetworkPlugins/group/kubenet/Start (157.53s)
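
The failure above is minikube start exiting non-zero while pod_ready.go is still polling the second CoreDNS pod ("coredns-64897985d-zfdtc") every couple of seconds. Outside the test harness, the same readiness gate can be expressed as one blocking call; a sketch, assuming the profile's kubeconfig context is still available:

	# Wait up to 5 minutes (the bound pod_ready.go uses) for the CoreDNS
	# pods, matched by the same k8s-app=kube-dns label, to become Ready.
	kubectl --context kubenet-20220512010229-7184 \
	  wait pod --namespace kube-system \
	  --selector k8s-app=kube-dns \
	  --for=condition=Ready --timeout=5m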

                                                
                                    

Test pass (230/268)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 16.62
4 TestDownloadOnly/v1.16.0/preload-exists 0.08
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.63
10 TestDownloadOnly/v1.23.5/json-events 13.68
11 TestDownloadOnly/v1.23.5/preload-exists 0
14 TestDownloadOnly/v1.23.5/kubectl 0
15 TestDownloadOnly/v1.23.5/LogsDuration 0.65
17 TestDownloadOnly/v1.23.6-rc.0/json-events 13.36
18 TestDownloadOnly/v1.23.6-rc.0/preload-exists 0
21 TestDownloadOnly/v1.23.6-rc.0/kubectl 0
22 TestDownloadOnly/v1.23.6-rc.0/LogsDuration 0.59
23 TestDownloadOnly/DeleteAll 11.71
24 TestDownloadOnly/DeleteAlwaysSucceeds 7.44
25 TestDownloadOnlyKic 46.12
26 TestBinaryMirror 16.96
27 TestOffline 232.47
29 TestAddons/Setup 389.4
33 TestAddons/parallel/MetricsServer 12.96
34 TestAddons/parallel/HelmTiller 37.35
36 TestAddons/parallel/CSI 94.56
38 TestAddons/serial/GCPAuth 26.37
39 TestAddons/StoppedEnableDisable 24.26
40 TestCertOptions 182.42
41 TestCertExpiration 379.88
42 TestDockerFlags 166.32
43 TestForceSystemdFlag 229.24
44 TestForceSystemdEnv 530.08
49 TestErrorSpam/setup 109.95
50 TestErrorSpam/start 22.39
51 TestErrorSpam/status 19.89
52 TestErrorSpam/pause 17.48
53 TestErrorSpam/unpause 18.05
54 TestErrorSpam/stop 33.75
57 TestFunctional/serial/CopySyncFile 0.03
58 TestFunctional/serial/StartWithProxy 131.36
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 34.68
61 TestFunctional/serial/KubeContext 0.24
62 TestFunctional/serial/KubectlGetPods 0.39
65 TestFunctional/serial/CacheCmd/cache/add_remote 18.62
66 TestFunctional/serial/CacheCmd/cache/add_local 9.27
67 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.36
68 TestFunctional/serial/CacheCmd/cache/list 0.34
69 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 6.45
70 TestFunctional/serial/CacheCmd/cache/cache_reload 25.08
71 TestFunctional/serial/CacheCmd/cache/delete 0.74
72 TestFunctional/serial/MinikubeKubectlCmd 2.08
73 TestFunctional/serial/MinikubeKubectlCmdDirectly 2.09
74 TestFunctional/serial/ExtraConfig 75.32
75 TestFunctional/serial/ComponentHealth 0.3
76 TestFunctional/serial/LogsCmd 7.68
77 TestFunctional/serial/LogsFileCmd 8.76
79 TestFunctional/parallel/ConfigCmd 2.26
81 TestFunctional/parallel/DryRun 13.92
82 TestFunctional/parallel/InternationalLanguage 6.78
83 TestFunctional/parallel/StatusCmd 19.89
88 TestFunctional/parallel/AddonsCmd 3.56
89 TestFunctional/parallel/PersistentVolumeClaim 47.52
91 TestFunctional/parallel/SSHCmd 14.87
92 TestFunctional/parallel/CpCmd 25.66
93 TestFunctional/parallel/MySQL 84.78
94 TestFunctional/parallel/FileSync 6.49
95 TestFunctional/parallel/CertSync 42.36
99 TestFunctional/parallel/NodeLabels 0.3
101 TestFunctional/parallel/NonActiveRuntimeDisabled 6.35
103 TestFunctional/parallel/DockerEnv/powershell 28.78
104 TestFunctional/parallel/ImageCommands/ImageListShort 4.23
105 TestFunctional/parallel/ImageCommands/ImageListTable 4.2
106 TestFunctional/parallel/ImageCommands/ImageListJson 4.28
107 TestFunctional/parallel/ImageCommands/ImageListYaml 4.32
108 TestFunctional/parallel/ImageCommands/ImageBuild 18.24
109 TestFunctional/parallel/ImageCommands/Setup 6.33
110 TestFunctional/parallel/UpdateContextCmd/no_changes 4.05
111 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 4.16
112 TestFunctional/parallel/UpdateContextCmd/no_clusters 4.05
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 16.88
117 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 19.17
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 14.43
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 24.71
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 8.96
122 TestFunctional/parallel/ImageCommands/ImageRemove 14.7
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 18
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 15.39
125 TestFunctional/parallel/ProfileCmd/profile_not_create 9.63
126 TestFunctional/parallel/ProfileCmd/profile_list 7.03
127 TestFunctional/parallel/ProfileCmd/profile_json_output 6.81
128 TestFunctional/parallel/Version/short 0.38
129 TestFunctional/parallel/Version/components 6.13
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
135 TestFunctional/delete_addon-resizer_images 0.02
136 TestFunctional/delete_my-image_image 0.01
137 TestFunctional/delete_minikube_cached_images 0.01
140 TestIngressAddonLegacy/StartLegacyK8sCluster 133.52
142 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 48.36
143 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 4.8
147 TestJSONOutput/start/Command 130.42
148 TestJSONOutput/start/Audit 0
150 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
151 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
153 TestJSONOutput/pause/Command 6.12
154 TestJSONOutput/pause/Audit 0
156 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
157 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
159 TestJSONOutput/unpause/Command 6.08
160 TestJSONOutput/unpause/Audit 0
162 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
163 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
165 TestJSONOutput/stop/Command 18.02
166 TestJSONOutput/stop/Audit 0
168 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
170 TestErrorJSONOutput 7.64
172 TestKicCustomNetwork/create_custom_network 139.52
173 TestKicCustomNetwork/use_default_bridge_network 125.55
174 TestKicExistingNetwork 140.99
175 TestKicCustomSubnet 143.48
176 TestMainNoArgs 0.35
179 TestMountStart/serial/StartWithMountFirst 51.76
180 TestMountStart/serial/VerifyMountFirst 6.43
181 TestMountStart/serial/StartWithMountSecond 52.39
182 TestMountStart/serial/VerifyMountSecond 6.42
183 TestMountStart/serial/DeleteFirst 19.94
184 TestMountStart/serial/VerifyMountPostDelete 6.47
185 TestMountStart/serial/Stop 8.92
186 TestMountStart/serial/RestartStopped 29.86
187 TestMountStart/serial/VerifyMountPostStop 6.45
190 TestMultiNode/serial/FreshStart2Nodes 250.65
191 TestMultiNode/serial/DeployApp2Nodes 26.24
192 TestMultiNode/serial/PingHostFrom2Pods 10.91
193 TestMultiNode/serial/AddNode 119.7
194 TestMultiNode/serial/ProfileList 6.53
195 TestMultiNode/serial/CopyFile 218.71
196 TestMultiNode/serial/StopNode 30.4
197 TestMultiNode/serial/StartAfterStop 61.82
198 TestMultiNode/serial/RestartKeepsNodes 191.27
199 TestMultiNode/serial/DeleteNode 44.68
200 TestMultiNode/serial/StopMultiNode 40.63
201 TestMultiNode/serial/RestartMultiNode 125.42
202 TestMultiNode/serial/ValidateNameConflict 146.55
206 TestPreload 338.47
207 TestScheduledStopWindows 216.41
211 TestInsufficientStorage 111.21
214 TestKubernetesUpgrade 305.78
215 TestMissingContainerUpgrade 394.72
217 TestNoKubernetes/serial/StartNoK8sWithVersion 0.46
218 TestStoppedBinaryUpgrade/Setup 0.53
219 TestNoKubernetes/serial/StartWithK8s 190.79
220 TestStoppedBinaryUpgrade/Upgrade 406.24
221 TestNoKubernetes/serial/StartWithStopK8s 75.31
230 TestPause/serial/Start 519.94
232 TestStoppedBinaryUpgrade/MinikubeLogs 13.8
233 TestPause/serial/SecondStartNoReconfiguration 41.4
247 TestStartStop/group/old-k8s-version/serial/FirstStart 553.62
249 TestStartStop/group/no-preload/serial/FirstStart 183.38
251 TestStartStop/group/embed-certs/serial/FirstStart 126.9
252 TestStartStop/group/no-preload/serial/DeployApp 11.15
253 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 7
254 TestStartStop/group/no-preload/serial/Stop 18.34
255 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 5.97
256 TestStartStop/group/no-preload/serial/SecondStart 412.58
257 TestStartStop/group/embed-certs/serial/DeployApp 10.11
258 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 5.98
259 TestStartStop/group/embed-certs/serial/Stop 18.79
260 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 6.23
261 TestStartStop/group/embed-certs/serial/SecondStart 413.57
263 TestStartStop/group/default-k8s-different-port/serial/FirstStart 133.79
264 TestStartStop/group/old-k8s-version/serial/DeployApp 10.26
265 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 8
266 TestStartStop/group/old-k8s-version/serial/Stop 18.21
267 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 6.01
268 TestStartStop/group/old-k8s-version/serial/SecondStart 474.77
269 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 41.19
270 TestStartStop/group/default-k8s-different-port/serial/DeployApp 11.2
271 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 6.59
272 TestStartStop/group/default-k8s-different-port/serial/Stop 18.63
273 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.55
274 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 5.91
275 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 6.41
276 TestStartStop/group/default-k8s-different-port/serial/SecondStart 429.64
277 TestStartStop/group/no-preload/serial/Pause 41.65
278 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 52.14
280 TestStartStop/group/newest-cni/serial/FirstStart 135.15
281 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.52
282 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 6.57
284 TestStartStop/group/newest-cni/serial/DeployApp 0
285 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 5.9
286 TestStartStop/group/newest-cni/serial/Stop 18.98
287 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 6.1
288 TestStartStop/group/newest-cni/serial/SecondStart 86.13
289 TestNetworkPlugins/group/auto/Start 144.81
290 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
291 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
292 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 7.68
293 TestStartStop/group/newest-cni/serial/Pause 45.59
294 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.05
295 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.55
296 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 6.6
298 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 45.1
299 TestNetworkPlugins/group/auto/KubeletFlags 7.16
301 TestNetworkPlugins/group/auto/NetCatPod 37.32
302 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.73
303 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 6.93
305 TestStartStop/group/default-k8s-different-port/serial/Pause 44.67
306 TestNetworkPlugins/group/auto/DNS 0.65
307 TestNetworkPlugins/group/auto/Localhost 0.59
308 TestNetworkPlugins/group/auto/HairPin 5.52
310 TestNetworkPlugins/group/false/Start 138.4
311 TestNetworkPlugins/group/false/KubeletFlags 6.44
312 TestNetworkPlugins/group/false/NetCatPod 19.83
313 TestNetworkPlugins/group/false/DNS 0.6
314 TestNetworkPlugins/group/false/Localhost 0.58
315 TestNetworkPlugins/group/false/HairPin 5.52
316 TestNetworkPlugins/group/kindnet/Start 164.94
317 TestNetworkPlugins/group/kindnet/ControllerPod 5.06
318 TestNetworkPlugins/group/kindnet/KubeletFlags 7.2
319 TestNetworkPlugins/group/kindnet/NetCatPod 21.83
320 TestNetworkPlugins/group/kindnet/DNS 0.98
321 TestNetworkPlugins/group/kindnet/Localhost 0.84
322 TestNetworkPlugins/group/kindnet/HairPin 0.75
323 TestNetworkPlugins/group/enable-default-cni/Start 388.4
324 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 7.01
325 TestNetworkPlugins/group/enable-default-cni/NetCatPod 21.28
327 TestNetworkPlugins/group/bridge/Start 134.12
329 TestNetworkPlugins/group/bridge/KubeletFlags 6.56
330 TestNetworkPlugins/group/bridge/NetCatPod 20.99
331 TestNetworkPlugins/group/bridge/DNS 0.6
332 TestNetworkPlugins/group/bridge/Localhost 0.49
333 TestNetworkPlugins/group/bridge/HairPin 0.63
TestDownloadOnly/v1.16.0/json-events (16.62s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220511225523-7184 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220511225523-7184 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker: (16.6208978s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (16.62s)
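
This subtest exercises minikube's machine-readable progress stream: with -o=json, each step of the download-only start is emitted as a JSON event instead of styled text. The Run line above can be reproduced outside the harness; a sketch with a hypothetical profile name (the jq filter just picks out the event types and is not part of the test):

	# Download the v1.16.0 images and preload without starting a cluster,
	# printing the type of each JSON progress event as it is emitted.
	minikube start -o=json --download-only -p download-only-sketch \
	  --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker \
	  | jq -r .type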

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.08s)
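
The sub-second runtime suggests preload-exists is only a local file check: it presumably asserts that the tarball fetched by json-events is now in the cache. An equivalent by-hand check against the path shown in the download log (a sketch; MINIKUBE_HOME as exported in the logs above):

	# The preload tarball downloaded by the previous subtest should be cached here.
	ls "$MINIKUBE_HOME"/cache/preloaded-tarball/preloaded-images-k8s-*-v1.16.0-docker-overlay2-amd64.tar.lz4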

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20220511225523-7184
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20220511225523-7184: exit status 85 (632.0716ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/11 22:55:25
	Running on machine: minikube4
	Binary: Built with gc go1.18.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0511 22:55:25.282303    5984 out.go:296] Setting OutFile to fd 608 ...
	I0511 22:55:25.344008    5984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 22:55:25.344008    5984 out.go:309] Setting ErrFile to fd 632...
	I0511 22:55:25.344008    5984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0511 22:55:25.355856    5984 root.go:300] Error reading config file at C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0511 22:55:25.359792    5984 out.go:303] Setting JSON to true
	I0511 22:55:25.361141    5984 start.go:115] hostinfo: {"hostname":"minikube4","uptime":8178,"bootTime":1652301547,"procs":161,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0511 22:55:25.362116    5984 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0511 22:55:25.387903    5984 out.go:97] [download-only-20220511225523-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0511 22:55:25.388561    5984 notify.go:193] Checking for updates...
	W0511 22:55:25.388561    5984 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0511 22:55:25.391554    5984 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0511 22:55:25.394659    5984 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0511 22:55:25.398132    5984 out.go:169] MINIKUBE_LOCATION=13639
	I0511 22:55:25.399225    5984 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0511 22:55:25.405330    5984 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0511 22:55:25.405569    5984 driver.go:358] Setting default libvirt URI to qemu:///system
	I0511 22:55:28.120320    5984 docker.go:137] docker version: linux-20.10.14
	I0511 22:55:28.129450    5984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 22:55:30.227568    5984 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0979778s)
	I0511 22:55:30.228391    5984 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-11 22:55:29.1784756 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0511 22:55:30.232990    5984 out.go:97] Using the docker driver based on user configuration
	I0511 22:55:30.232990    5984 start.go:284] selected driver: docker
	I0511 22:55:30.232990    5984 start.go:801] validating driver "docker" against <nil>
	I0511 22:55:30.252703    5984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 22:55:32.333049    5984 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0801179s)
	I0511 22:55:32.333403    5984 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-11 22:55:31.2976842 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0511 22:55:32.333698    5984 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0511 22:55:32.457790    5984 start_flags.go:373] Using suggested 16300MB memory alloc based on sys=65534MB, container=51405MB
	I0511 22:55:32.458750    5984 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0511 22:55:32.461409    5984 out.go:169] Using Docker Desktop driver with the root privilege
	I0511 22:55:32.464124    5984 cni.go:95] Creating CNI manager for ""
	I0511 22:55:32.464124    5984 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0511 22:55:32.464124    5984 start_flags.go:306] config:
	{Name:download-only-20220511225523-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220511225523-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 22:55:32.466364    5984 out.go:97] Starting control plane node download-only-20220511225523-7184 in cluster download-only-20220511225523-7184
	I0511 22:55:32.466364    5984 cache.go:120] Beginning downloading kic base image for docker with docker
	I0511 22:55:32.469430    5984 out.go:97] Pulling base image ...
	I0511 22:55:32.469430    5984 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0511 22:55:32.469888    5984 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0511 22:55:32.517429    5984 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0511 22:55:32.517429    5984 cache.go:57] Caching tarball of preloaded images
	I0511 22:55:32.519090    5984 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0511 22:55:32.521576    5984 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0511 22:55:32.521653    5984 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0511 22:55:32.587238    5984 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0511 22:55:33.548010    5984 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a to local cache
	I0511 22:55:33.548010    5984 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.30-1652251400-14138@sha256_8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a.tar
	I0511 22:55:33.548010    5984 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.30-1652251400-14138@sha256_8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a.tar
	I0511 22:55:33.548010    5984 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local cache directory
	I0511 22:55:33.549005    5984 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a to local cache
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220511225523-7184"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.63s)
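
One detail worth noting in the stdout above: the localpath.go "windows sanitize" lines show minikube rewriting the kicbase image reference before using it as a cache file name, because colons are not legal in Windows paths. Judging from the before/after pair in the log, the visible transformation is simply every ":" becoming "_" (a sketch of that effect, not minikube's actual implementation):

	# kicbase-builds:v0.0.30-...@sha256:8c84...  ->  kicbase-builds_v0.0.30-...@sha256_8c84....tar
	image='kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a'
	printf '%s.tar\n' "$image" | tr ':' '_'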

                                                
                                    
TestDownloadOnly/v1.23.5/json-events (13.68s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.5/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220511225523-7184 --force --alsologtostderr --kubernetes-version=v1.23.5 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220511225523-7184 --force --alsologtostderr --kubernetes-version=v1.23.5 --container-runtime=docker --driver=docker: (13.679144s)
--- PASS: TestDownloadOnly/v1.23.5/json-events (13.68s)

                                                
                                    
TestDownloadOnly/v1.23.5/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.5/preload-exists
--- PASS: TestDownloadOnly/v1.23.5/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.5/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.5/kubectl
--- PASS: TestDownloadOnly/v1.23.5/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.5/LogsDuration (0.65s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.5/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20220511225523-7184
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20220511225523-7184: exit status 85 (648.6918ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/11 22:55:41
	Running on machine: minikube4
	Binary: Built with gc go1.18.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0511 22:55:41.102062    7656 out.go:296] Setting OutFile to fd 628 ...
	I0511 22:55:41.154618    7656 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 22:55:41.154618    7656 out.go:309] Setting ErrFile to fd 644...
	I0511 22:55:41.154618    7656 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0511 22:55:41.170628    7656 root.go:300] Error reading config file at C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0511 22:55:41.171628    7656 out.go:303] Setting JSON to true
	I0511 22:55:41.176620    7656 start.go:115] hostinfo: {"hostname":"minikube4","uptime":8194,"bootTime":1652301547,"procs":162,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0511 22:55:41.176620    7656 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0511 22:55:41.238631    7656 out.go:97] [download-only-20220511225523-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0511 22:55:41.239723    7656 notify.go:193] Checking for updates...
	I0511 22:55:41.486923    7656 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0511 22:55:41.490383    7656 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0511 22:55:41.492757    7656 out.go:169] MINIKUBE_LOCATION=13639
	I0511 22:55:41.496702    7656 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0511 22:55:41.501238    7656 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0511 22:55:41.502361    7656 config.go:178] Loaded profile config "download-only-20220511225523-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0511 22:55:41.502361    7656 start.go:709] api.Load failed for download-only-20220511225523-7184: filestore "download-only-20220511225523-7184": Docker machine "download-only-20220511225523-7184" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0511 22:55:41.503013    7656 driver.go:358] Setting default libvirt URI to qemu:///system
	W0511 22:55:41.503013    7656 start.go:709] api.Load failed for download-only-20220511225523-7184: filestore "download-only-20220511225523-7184": Docker machine "download-only-20220511225523-7184" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0511 22:55:44.127337    7656 docker.go:137] docker version: linux-20.10.14
	I0511 22:55:44.137304    7656 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 22:55:46.248251    7656 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1107272s)
	I0511 22:55:46.282313    7656 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-11 22:55:45.1922372 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0511 22:55:46.285544    7656 out.go:97] Using the docker driver based on existing profile
	I0511 22:55:46.285544    7656 start.go:284] selected driver: docker
	I0511 22:55:46.285544    7656 start.go:801] validating driver "docker" against &{Name:download-only-20220511225523-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220511225523-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 22:55:46.307067    7656 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 22:55:48.370376    7656 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.063129s)
	I0511 22:55:48.370768    7656 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-11 22:55:47.3301748 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0511 22:55:48.421705    7656 cni.go:95] Creating CNI manager for ""
	I0511 22:55:48.421705    7656 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0511 22:55:48.421705    7656 start_flags.go:306] config:
	{Name:download-only-20220511225523-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:download-only-20220511225523-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 22:55:48.843002    7656 out.go:97] Starting control plane node download-only-20220511225523-7184 in cluster download-only-20220511225523-7184
	I0511 22:55:48.843002    7656 cache.go:120] Beginning downloading kic base image for docker with docker
	I0511 22:55:48.845685    7656 out.go:97] Pulling base image ...
	I0511 22:55:48.845783    7656 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0511 22:55:48.845947    7656 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0511 22:55:48.896113    7656 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.5/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0511 22:55:48.896113    7656 cache.go:57] Caching tarball of preloaded images
	I0511 22:55:48.896113    7656 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0511 22:55:48.899144    7656 out.go:97] Downloading Kubernetes v1.23.5 preload ...
	I0511 22:55:48.899144    7656 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4 ...
	I0511 22:55:48.965956    7656 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.5/preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4?checksum=md5:d0fb3d86acaea9a7773bdef3468eac56 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0511 22:55:49.977503    7656 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a to local cache
	I0511 22:55:49.977503    7656 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.30-1652251400-14138@sha256_8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a.tar
	I0511 22:55:49.978075    7656 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.30-1652251400-14138@sha256_8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a.tar
	I0511 22:55:49.978247    7656 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local cache directory
	I0511 22:55:49.978501    7656 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local cache directory, skipping pull
	I0511 22:55:49.978590    7656 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in cache, skipping pull
	I0511 22:55:49.978590    7656 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220511225523-7184"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.5/LogsDuration (0.65s)

                                                
                                    
TestDownloadOnly/v1.23.6-rc.0/json-events (13.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6-rc.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220511225523-7184 --force --alsologtostderr --kubernetes-version=v1.23.6-rc.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-20220511225523-7184 --force --alsologtostderr --kubernetes-version=v1.23.6-rc.0 --container-runtime=docker --driver=docker: (13.3581247s)
--- PASS: TestDownloadOnly/v1.23.6-rc.0/json-events (13.36s)

                                                
                                    
TestDownloadOnly/v1.23.6-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.23.6-rc.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.6-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.23.6-rc.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.6-rc.0/LogsDuration (0.59s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6-rc.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-20220511225523-7184
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-20220511225523-7184: exit status 85 (584.2522ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/05/11 22:55:55
	Running on machine: minikube4
	Binary: Built with gc go1.18.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0511 22:55:55.413718    6760 out.go:296] Setting OutFile to fd 628 ...
	I0511 22:55:55.473274    6760 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 22:55:55.474288    6760 out.go:309] Setting ErrFile to fd 644...
	I0511 22:55:55.474329    6760 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0511 22:55:55.486099    6760 root.go:300] Error reading config file at C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0511 22:55:55.486455    6760 out.go:303] Setting JSON to true
	I0511 22:55:55.488220    6760 start.go:115] hostinfo: {"hostname":"minikube4","uptime":8208,"bootTime":1652301547,"procs":162,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0511 22:55:55.488220    6760 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0511 22:55:55.495525    6760 out.go:97] [download-only-20220511225523-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0511 22:55:55.495565    6760 notify.go:193] Checking for updates...
	I0511 22:55:55.500333    6760 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0511 22:55:55.502511    6760 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0511 22:55:55.504503    6760 out.go:169] MINIKUBE_LOCATION=13639
	I0511 22:55:55.507979    6760 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0511 22:55:55.514004    6760 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0511 22:55:55.514594    6760 config.go:178] Loaded profile config "download-only-20220511225523-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	W0511 22:55:55.514594    6760 start.go:709] api.Load failed for download-only-20220511225523-7184: filestore "download-only-20220511225523-7184": Docker machine "download-only-20220511225523-7184" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0511 22:55:55.514594    6760 driver.go:358] Setting default libvirt URI to qemu:///system
	W0511 22:55:55.514594    6760 start.go:709] api.Load failed for download-only-20220511225523-7184: filestore "download-only-20220511225523-7184": Docker machine "download-only-20220511225523-7184" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0511 22:55:58.160148    6760 docker.go:137] docker version: linux-20.10.14
	I0511 22:55:58.168292    6760 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 22:56:00.271764    6760 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1033709s)
	I0511 22:56:00.271764    6760 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-11 22:55:59.1974815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0511 22:56:00.596214    6760 out.go:97] Using the docker driver based on existing profile
	I0511 22:56:00.743900    6760 start.go:284] selected driver: docker
	I0511 22:56:00.743900    6760 start.go:801] validating driver "docker" against &{Name:download-only-20220511225523-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:download-only-20220511225523-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 22:56:00.766274    6760 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 22:56:02.823278    6760 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0569049s)
	I0511 22:56:02.823613    6760 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-05-11 22:56:01.7828168 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0511 22:56:02.871772    6760 cni.go:95] Creating CNI manager for ""
	I0511 22:56:02.871821    6760 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0511 22:56:02.871852    6760 start_flags.go:306] config:
	{Name:download-only-20220511225523-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:16300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:download-only-20220511225523-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 22:56:02.928921    6760 out.go:97] Starting control plane node download-only-20220511225523-7184 in cluster download-only-20220511225523-7184
	I0511 22:56:02.929197    6760 cache.go:120] Beginning downloading kic base image for docker with docker
	I0511 22:56:02.931475    6760 out.go:97] Pulling base image ...
	I0511 22:56:02.931475    6760 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime docker
	I0511 22:56:02.931475    6760 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local docker daemon
	I0511 22:56:02.978925    6760 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6-rc.0/preloaded-images-k8s-v18-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4
	I0511 22:56:02.978925    6760 cache.go:57] Caching tarball of preloaded images
	I0511 22:56:02.979556    6760 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime docker
	I0511 22:56:03.014704    6760 out.go:97] Downloading Kubernetes v1.23.6-rc.0 preload ...
	I0511 22:56:03.015678    6760 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0511 22:56:03.080582    6760 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6-rc.0/preloaded-images-k8s-v18-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8c474a02b5d7628fe0abb1816ff0a9c8 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4
	I0511 22:56:04.029869    6760 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a to local cache
	I0511 22:56:04.029894    6760 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.30-1652251400-14138@sha256_8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a.tar
	I0511 22:56:04.030484    6760 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.30-1652251400-14138@sha256_8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a.tar
	I0511 22:56:04.030520    6760 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local cache directory
	I0511 22:56:04.030551    6760 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a in local cache directory, skipping pull
	I0511 22:56:04.030551    6760 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a exists in cache, skipping pull
	I0511 22:56:04.030551    6760 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220511225523-7184"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.6-rc.0/LogsDuration (0.59s)

                                                
                                    
TestDownloadOnly/DeleteAll (11.71s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (11.709005s)
--- PASS: TestDownloadOnly/DeleteAll (11.71s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (7.44s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-20220511225523-7184
aaa_download_only_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-20220511225523-7184: (7.4401662s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (7.44s)

                                                
                                    
TestDownloadOnlyKic (46.12s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-20220511225635-7184 --force --alsologtostderr --driver=docker
aaa_download_only_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-20220511225635-7184 --force --alsologtostderr --driver=docker: (36.4041125s)
helpers_test.go:175: Cleaning up "download-docker-20220511225635-7184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-20220511225635-7184
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-20220511225635-7184: (8.5170437s)
--- PASS: TestDownloadOnlyKic (46.12s)

                                                
                                    
TestBinaryMirror (16.96s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-20220511225721-7184 --alsologtostderr --binary-mirror http://127.0.0.1:63208 --driver=docker
aaa_download_only_test.go:310: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-20220511225721-7184 --alsologtostderr --binary-mirror http://127.0.0.1:63208 --driver=docker: (8.4418669s)
helpers_test.go:175: Cleaning up "binary-mirror-20220511225721-7184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-20220511225721-7184
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p binary-mirror-20220511225721-7184: (8.2839488s)
--- PASS: TestBinaryMirror (16.96s)

                                                
                                    
TestOffline (232.47s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-20220512004748-7184 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-20220512004748-7184 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (3m15.7290854s)
helpers_test.go:175: Cleaning up "offline-docker-20220512004748-7184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-20220512004748-7184
E0512 00:51:24.912013    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestOffline
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-20220512004748-7184: (36.7428574s)
--- PASS: TestOffline (232.47s)

                                                
                                    
TestAddons/Setup (389.4s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:75: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-20220511225738-7184 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:75: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-20220511225738-7184 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m29.3996691s)
--- PASS: TestAddons/Setup (389.40s)

                                                
                                    
TestAddons/parallel/MetricsServer (12.96s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:357: metrics-server stabilized in 40.1808ms

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:342: "metrics-server-bd6f4dd56-2ckfn" [45ac1f8e-a527-4538-8c90-306e5c5b329f] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0605281s

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220511225738-7184 top pods -n kube-system

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220511225738-7184 addons disable metrics-server --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:382: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220511225738-7184 addons disable metrics-server --alsologtostderr -v=1: (7.5564727s)
--- PASS: TestAddons/parallel/MetricsServer (12.96s)

                                                
                                    
TestAddons/parallel/HelmTiller (37.35s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:406: tiller-deploy stabilized in 39.253ms

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:342: "tiller-deploy-6d67d5465d-8kgvj" [e82ed113-61b6-4ab3-bfc4-ba611a3c54ae] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.051528s

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) Run:  kubectl --context addons-20220511225738-7184 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) Done: kubectl --context addons-20220511225738-7184 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (15.9222287s)
addons_test.go:428: kubectl --context addons-20220511225738-7184 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: 
addons_test.go:423: (dbg) Run:  kubectl --context addons-20220511225738-7184 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) Done: kubectl --context addons-20220511225738-7184 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (9.5742032s)
addons_test.go:440: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220511225738-7184 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:440: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220511225738-7184 addons disable helm-tiller --alsologtostderr -v=1: (6.4192642s)
--- PASS: TestAddons/parallel/HelmTiller (37.35s)

                                                
                                    
TestAddons/parallel/CSI (94.56s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:511: csi-hostpath-driver pods stabilized in 48.6801ms

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:514: (dbg) Run:  kubectl --context addons-20220511225738-7184 create -f testdata\csi-hostpath-driver\pvc.yaml

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:514: (dbg) Done: kubectl --context addons-20220511225738-7184 create -f testdata\csi-hostpath-driver\pvc.yaml: (1.6135222s)
addons_test.go:519: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220511225738-7184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220511225738-7184 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:524: (dbg) Run:  kubectl --context addons-20220511225738-7184 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:529: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [7c2480df-5e39-4b1f-88ee-10c90f047575] Pending
helpers_test.go:342: "task-pv-pod" [7c2480df-5e39-4b1f-88ee-10c90f047575] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [7c2480df-5e39-4b1f-88ee-10c90f047575] Running

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:529: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 46.1024108s
addons_test.go:534: (dbg) Run:  kubectl --context addons-20220511225738-7184 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:539: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220511225738-7184 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220511225738-7184 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:544: (dbg) Run:  kubectl --context addons-20220511225738-7184 delete pod task-pv-pod
addons_test.go:544: (dbg) Done: kubectl --context addons-20220511225738-7184 delete pod task-pv-pod: (2.2676748s)
addons_test.go:550: (dbg) Run:  kubectl --context addons-20220511225738-7184 delete pvc hpvc
addons_test.go:556: (dbg) Run:  kubectl --context addons-20220511225738-7184 create -f testdata\csi-hostpath-driver\pvc-restore.yaml

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220511225738-7184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:566: (dbg) Run:  kubectl --context addons-20220511225738-7184 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [6d9da125-0c84-41b3-bf26-c616439fd903] Pending
helpers_test.go:342: "task-pv-pod-restore" [6d9da125-0c84-41b3-bf26-c616439fd903] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [6d9da125-0c84-41b3-bf26-c616439fd903] Running
addons_test.go:571: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 16.0824193s
addons_test.go:576: (dbg) Run:  kubectl --context addons-20220511225738-7184 delete pod task-pv-pod-restore
addons_test.go:576: (dbg) Done: kubectl --context addons-20220511225738-7184 delete pod task-pv-pod-restore: (2.2163299s)
addons_test.go:580: (dbg) Run:  kubectl --context addons-20220511225738-7184 delete pvc hpvc-restore
addons_test.go:584: (dbg) Run:  kubectl --context addons-20220511225738-7184 delete volumesnapshot new-snapshot-demo
addons_test.go:588: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220511225738-7184 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:588: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220511225738-7184 addons disable csi-hostpath-driver --alsologtostderr -v=1: (13.5601656s)
addons_test.go:592: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220511225738-7184 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:592: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220511225738-7184 addons disable volumesnapshots --alsologtostderr -v=1: (5.6488793s)
--- PASS: TestAddons/parallel/CSI (94.56s)

                                                
                                    
TestAddons/serial/GCPAuth (26.37s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth
addons_test.go:603: (dbg) Run:  kubectl --context addons-20220511225738-7184 create -f testdata\busybox.yaml
addons_test.go:603: (dbg) Done: kubectl --context addons-20220511225738-7184 create -f testdata\busybox.yaml: (1.7021533s)
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [923e21cd-5471-4f49-af42-8ee319d16c3f] Pending
helpers_test.go:342: "busybox" [923e21cd-5471-4f49-af42-8ee319d16c3f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [923e21cd-5471-4f49-af42-8ee319d16c3f] Running
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 9.0444103s
addons_test.go:615: (dbg) Run:  kubectl --context addons-20220511225738-7184 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:628: (dbg) Run:  kubectl --context addons-20220511225738-7184 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:652: (dbg) Run:  kubectl --context addons-20220511225738-7184 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:665: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220511225738-7184 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:665: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220511225738-7184 addons disable gcp-auth --alsologtostderr -v=1: (13.9259056s)
--- PASS: TestAddons/serial/GCPAuth (26.37s)

                                                
                                    
TestAddons/StoppedEnableDisable (24.26s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-20220511225738-7184
addons_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-20220511225738-7184: (18.3387002s)
addons_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-20220511225738-7184
addons_test.go:136: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-20220511225738-7184: (2.9964112s)
addons_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-20220511225738-7184
addons_test.go:140: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-20220511225738-7184: (2.9257671s)
--- PASS: TestAddons/StoppedEnableDisable (24.26s)

                                                
                                    
TestCertOptions (182.42s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-20220512010013-7184 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-20220512010013-7184 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (2m20.4923538s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-20220512010013-7184 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-20220512010013-7184 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (6.9560898s)
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-20220512010013-7184 -- "sudo cat /etc/kubernetes/admin.conf"

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-20220512010013-7184 -- "sudo cat /etc/kubernetes/admin.conf": (7.0387746s)
helpers_test.go:175: Cleaning up "cert-options-20220512010013-7184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-20220512010013-7184
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-20220512010013-7184: (26.7581575s)
--- PASS: TestCertOptions (182.42s)

                                                
                                    
TestCertExpiration (379.88s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-20220512005951-7184 --memory=2048 --cert-expiration=3m --driver=docker
E0512 00:59:52.895350    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-20220512005951-7184 --memory=2048 --cert-expiration=3m --driver=docker: (2m13.3975913s)

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-20220512005951-7184 --memory=2048 --cert-expiration=8760h --driver=docker
E0512 01:05:31.804511    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-20220512005951-7184 --memory=2048 --cert-expiration=8760h --driver=docker: (41.8572693s)
helpers_test.go:175: Cleaning up "cert-expiration-20220512005951-7184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-20220512005951-7184
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-20220512005951-7184: (24.6211891s)
--- PASS: TestCertExpiration (379.88s)

                                                
                                    
TestDockerFlags (166.32s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-20220512005959-7184 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-20220512005959-7184 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (2m5.7148666s)
docker_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20220512005959-7184 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:50: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-20220512005959-7184 ssh "sudo systemctl show docker --property=Environment --no-pager": (7.8964402s)
docker_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-20220512005959-7184 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:61: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-20220512005959-7184 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (7.3267799s)
helpers_test.go:175: Cleaning up "docker-flags-20220512005959-7184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-20220512005959-7184

                                                
                                                
=== CONT  TestDockerFlags
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-20220512005959-7184: (25.3812719s)
--- PASS: TestDockerFlags (166.32s)

                                                
                                    
TestForceSystemdFlag (229.24s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-20220512004748-7184 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-20220512004748-7184 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (3m4.270156s)
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-20220512004748-7184 ssh "docker info --format {{.CgroupDriver}}"

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:104: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-20220512004748-7184 ssh "docker info --format {{.CgroupDriver}}": (11.3960791s)
helpers_test.go:175: Cleaning up "force-systemd-flag-20220512004748-7184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-20220512004748-7184

                                                
                                                
=== CONT  TestForceSystemdFlag
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-20220512004748-7184: (33.5715974s)
--- PASS: TestForceSystemdFlag (229.24s)
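
The assertion behind the "docker info --format {{.CgroupDriver}}" probe can be sketched as below (helper name is hypothetical; assumes the fmt, os/exec, and strings imports from the sketch under TestDockerFlags):

    // Sketch only: with --force-systemd the node's Docker daemon is
    // expected to report the systemd cgroup driver.
    func assertSystemdCgroupDriver(profile string) error {
    	out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", profile,
    		"ssh", "docker info --format {{.CgroupDriver}}").Output()
    	if err != nil {
    		return err
    	}
    	if got := strings.TrimSpace(string(out)); got != "systemd" {
    		return fmt.Errorf("cgroup driver = %q, want systemd", got)
    	}
    	return nil
    }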

                                                
                                    
TestForceSystemdEnv (530.08s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-20220512010244-7184 --memory=2048 --alsologtostderr -v=5 --driver=docker

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-20220512010244-7184 --memory=2048 --alsologtostderr -v=5 --driver=docker: (8m15.2646859s)
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-20220512010244-7184 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-20220512010244-7184 ssh "docker info --format {{.CgroupDriver}}": (7.6495787s)
helpers_test.go:175: Cleaning up "force-systemd-env-20220512010244-7184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-20220512010244-7184
E0512 01:11:24.968596    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-20220512010244-7184: (27.1661949s)
--- PASS: TestForceSystemdEnv (530.08s)

                                                
                                    
TestErrorSpam/setup (109.95s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-20220511230656-7184 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 --driver=docker
error_spam_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-20220511230656-7184 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 --driver=docker: (1m49.9511969s)
error_spam_test.go:88: acceptable stderr: "! C:\\ProgramData\\chocolatey\\bin\\kubectl.exe is version 1.18.2, which may have incompatibilites with Kubernetes 1.23.5."
--- PASS: TestErrorSpam/setup (109.95s)
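
The "acceptable stderr" line shows the test tolerating an allowlisted warning rather than flagging it as spam. A rough sketch of that idea (the allowlist below is illustrative, not the real error_spam_test.go table):

    // Sketch only: stderr lines count as spam unless they match a
    // known-benign pattern. The "incompatibilites" spelling matches the
    // actual minikube output quoted above.
    func isAcceptableStderr(line string) bool {
    	acceptable := []string{
    		"which may have incompatibilites with Kubernetes",
    	}
    	for _, pat := range acceptable {
    		if strings.Contains(line, pat) {
    			return true
    		}
    	}
    	return false
    }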

                                                
                                    
TestErrorSpam/start (22.39s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 start --dry-run
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 start --dry-run: (7.5517814s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 start --dry-run
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 start --dry-run: (7.4087314s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 start --dry-run
E0511 23:09:08.187322    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0511 23:09:08.202126    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0511 23:09:08.217808    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0511 23:09:08.249589    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0511 23:09:08.296550    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0511 23:09:08.392171    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0511 23:09:08.566286    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0511 23:09:08.895198    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 start --dry-run: (7.4253145s)
--- PASS: TestErrorSpam/start (22.39s)

                                                
                                    
TestErrorSpam/status (19.89s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 status
E0511 23:09:09.535334    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0511 23:09:10.827728    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0511 23:09:13.399761    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 status: (6.6873503s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 status
E0511 23:09:18.534907    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 status: (6.6097971s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 status
E0511 23:09:28.783542    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 status: (6.5857038s)
--- PASS: TestErrorSpam/status (19.89s)

                                                
                                    
TestErrorSpam/pause (17.48s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 pause
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 pause: (6.252226s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 pause
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 pause: (5.6012057s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 pause
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 pause: (5.6281374s)
--- PASS: TestErrorSpam/pause (17.48s)

                                                
                                    
TestErrorSpam/unpause (18.05s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 unpause
E0511 23:09:49.267283    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 unpause: (6.7530643s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 unpause
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 unpause: (5.6159471s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 unpause
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 unpause: (5.6811881s)
--- PASS: TestErrorSpam/unpause (18.05s)

                                                
                                    
TestErrorSpam/stop (33.75s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 stop
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 stop: (18.1847086s)
error_spam_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 stop
E0511 23:10:30.231623    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
error_spam_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 stop: (7.8038179s)
error_spam_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 stop
error_spam_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-20220511230656-7184 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-20220511230656-7184 stop: (7.7588232s)
--- PASS: TestErrorSpam/stop (33.75s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1784: local sync path: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\7184\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                    
TestFunctional/serial/StartWithProxy (131.36s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2163: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220511231058-7184 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
E0511 23:11:52.159322    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
functional_test.go:2163: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220511231058-7184 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (2m11.3585203s)
--- PASS: TestFunctional/serial/StartWithProxy (131.36s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (34.68s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220511231058-7184 --alsologtostderr -v=8
functional_test.go:654: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220511231058-7184 --alsologtostderr -v=8: (34.6819692s)
functional_test.go:658: soft start took 34.6830138s for "functional-20220511231058-7184" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.68s)

                                                
                                    
TestFunctional/serial/KubeContext (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.24s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.39s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-20220511231058-7184 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.39s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (18.62s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 cache add k8s.gcr.io/pause:3.1
functional_test.go:1044: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 cache add k8s.gcr.io/pause:3.1: (6.3483433s)
functional_test.go:1044: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 cache add k8s.gcr.io/pause:3.3
functional_test.go:1044: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 cache add k8s.gcr.io/pause:3.3: (6.1827526s)
functional_test.go:1044: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 cache add k8s.gcr.io/pause:latest
functional_test.go:1044: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 cache add k8s.gcr.io/pause:latest: (6.0873685s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (18.62s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (9.27s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220511231058-7184 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2063075690\001
functional_test.go:1072: (dbg) Done: docker build -t minikube-local-cache-test:functional-20220511231058-7184 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2063075690\001: (2.3306383s)
functional_test.go:1084: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 cache add minikube-local-cache-test:functional-20220511231058-7184
E0511 23:14:08.201141    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
functional_test.go:1084: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 cache add minikube-local-cache-test:functional-20220511231058-7184: (5.4243815s)
functional_test.go:1089: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 cache delete minikube-local-cache-test:functional-20220511231058-7184
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220511231058-7184
functional_test.go:1078: (dbg) Done: docker rmi minikube-local-cache-test:functional-20220511231058-7184: (1.1144348s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (9.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.36s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1097: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.36s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.34s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.34s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (6.45s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh sudo crictl images
functional_test.go:1119: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh sudo crictl images: (6.4523695s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (6.45s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (25.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1142: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh sudo docker rmi k8s.gcr.io/pause:latest: (6.4671586s)
functional_test.go:1148: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (6.398972s)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 cache reload
E0511 23:14:36.022792    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
functional_test.go:1153: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 cache reload: (5.8595955s)
functional_test.go:1158: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1158: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: (6.3487108s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (25.08s)
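
The round trip above removes the image on the node, proves it is gone, reloads from minikube's on-host cache, and proves it is back. A sketch of that sequence (helper name hypothetical; command strings mirror the log):

    // Sketch only: the four steps of the cache-reload round trip.
    func cacheReloadRoundTrip(profile, image string) error {
    	mk := "out/minikube-windows-amd64.exe"
    	if err := exec.Command(mk, "-p", profile, "ssh", "sudo docker rmi "+image).Run(); err != nil {
    		return err // step 1: remove the image inside the node
    	}
    	if exec.Command(mk, "-p", profile, "ssh", "sudo crictl inspecti "+image).Run() == nil {
    		return fmt.Errorf("%s still present after rmi", image) // step 2: inspecti must fail now
    	}
    	if err := exec.Command(mk, "-p", profile, "cache", "reload").Run(); err != nil {
    		return err // step 3: push cached images back into the node
    	}
    	return exec.Command(mk, "-p", profile, "ssh", "sudo crictl inspecti "+image).Run() // step 4: must pass
    }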

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.74s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-windows-amd64.exe cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.74s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (2.08s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 kubectl -- --context functional-20220511231058-7184 get pods
functional_test.go:711: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 kubectl -- --context functional-20220511231058-7184 get pods: (2.0785942s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (2.08s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (2.09s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out\kubectl.exe --context functional-20220511231058-7184 get pods
functional_test.go:736: (dbg) Done: out\kubectl.exe --context functional-20220511231058-7184 get pods: (2.0850393s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (2.09s)

                                                
                                    
TestFunctional/serial/ExtraConfig (75.32s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220511231058-7184 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:752: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220511231058-7184 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m15.3148731s)
functional_test.go:756: restart took 1m15.3150872s for "functional-20220511231058-7184" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (75.32s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-20220511231058-7184 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.30s)
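
The phase/status pairs above come from decoding the kubectl JSON. The relevant slice of the pod schema looks roughly like this (field tags follow the Kubernetes API; the health rule is that every control-plane pod must be phase Running with a Ready condition of "True"):

    // Sketch only: minimal pod-list shape for the health check above.
    type podList struct {
    	Items []struct {
    		Metadata struct {
    			Name string `json:"name"`
    		} `json:"metadata"`
    		Status struct {
    			Phase      string `json:"phase"` // "Running" when healthy
    			Conditions []struct {
    				Type   string `json:"type"`   // e.g. "Ready"
    				Status string `json:"status"` // "True" when ready
    			} `json:"conditions"`
    		} `json:"status"`
    	} `json:"items"`
    }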

                                                
                                    
TestFunctional/serial/LogsCmd (7.68s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 logs
functional_test.go:1231: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 logs: (7.678727s)
--- PASS: TestFunctional/serial/LogsCmd (7.68s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (8.76s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd4133432057\001\logs.txt
functional_test.go:1245: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd4133432057\001\logs.txt: (8.7522853s)
--- PASS: TestFunctional/serial/LogsFileCmd (8.76s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (2.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 config get cpus: exit status 14 (359.2134ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 config get cpus: exit status 14 (386.2287ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (2.26s)
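
The cycle above depends on "config get" exiting non-zero (status 14 here) when the key is unset. From Go, that exit code can be recovered through exec.ExitError; a sketch (assumes the errors and os/exec imports):

    // Sketch only: report the exit code of "config get cpus".
    func configGetExitCode(profile string) int {
    	cmd := exec.Command("out/minikube-windows-amd64.exe", "-p", profile, "config", "get", "cpus")
    	if err := cmd.Run(); err != nil {
    		var ee *exec.ExitError
    		if errors.As(err, &ee) {
    			return ee.ExitCode() // 14 when the key is missing, per this log
    		}
    		return -1 // command failed to start at all
    	}
    	return 0
    }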

                                                
                                    
TestFunctional/parallel/DryRun (13.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220511231058-7184 --dry-run --memory 250MB --alsologtostderr --driver=docker

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220511231058-7184 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (5.2834754s)

                                                
                                                
-- stdout --
	* [functional-20220511231058-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13639
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0511 23:19:01.432368    8656 out.go:296] Setting OutFile to fd 780 ...
	I0511 23:19:01.512146    8656 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 23:19:01.512146    8656 out.go:309] Setting ErrFile to fd 912...
	I0511 23:19:01.512146    8656 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 23:19:01.526149    8656 out.go:303] Setting JSON to false
	I0511 23:19:01.529154    8656 start.go:115] hostinfo: {"hostname":"minikube4","uptime":9594,"bootTime":1652301547,"procs":167,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0511 23:19:01.529154    8656 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0511 23:19:01.534974    8656 out.go:177] * [functional-20220511231058-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0511 23:19:01.538658    8656 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0511 23:19:01.541235    8656 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0511 23:19:01.543509    8656 out.go:177]   - MINIKUBE_LOCATION=13639
	I0511 23:19:01.545506    8656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0511 23:19:01.548411    8656 config.go:178] Loaded profile config "functional-20220511231058-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 23:19:01.548411    8656 driver.go:358] Setting default libvirt URI to qemu:///system
	I0511 23:19:04.275667    8656 docker.go:137] docker version: linux-20.10.14
	I0511 23:19:04.282924    8656 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 23:19:06.354856    8656 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.0716449s)
	I0511 23:19:06.355580    8656 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-05-11 23:19:05.2939878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0511 23:19:06.359711    8656 out.go:177] * Using the docker driver based on existing profile
	I0511 23:19:06.361946    8656 start.go:284] selected driver: docker
	I0511 23:19:06.361985    8656 start.go:801] validating driver "docker" against &{Name:functional-20220511231058-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220511231058-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 23:19:06.362353    8656 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0511 23:19:06.417105    8656 out.go:177] 
	W0511 23:19:06.419106    8656 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0511 23:19:06.421115    8656 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220511231058-7184 --dry-run --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:986: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-20220511231058-7184 --dry-run --alsologtostderr -v=1 --driver=docker: (8.6331653s)
--- PASS: TestFunctional/parallel/DryRun (13.92s)
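
The exit-status-23 run exercises minikube's requested-memory guard: 250MiB requested against a 1800MB usable minimum, both values taken from the stderr above. As a stand-alone sketch (constant and function names hypothetical):

    // Sketch only: the guard behind RSRC_INSUFFICIENT_REQ_MEMORY.
    const minUsableMemoryMB = 1800

    func validateRequestedMemory(reqMB int) error {
    	if reqMB < minUsableMemoryMB {
    		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
    			reqMB, minUsableMemoryMB)
    	}
    	return nil // the dry run proceeds past validation
    }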

                                                
                                    
TestFunctional/parallel/InternationalLanguage (6.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-20220511231058-7184 --dry-run --memory 250MB --alsologtostderr --driver=docker

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-20220511231058-7184 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (6.7817828s)

                                                
                                                
-- stdout --
	* [functional-20220511231058-7184] minikube v1.25.2 sur Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13639
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0511 23:18:54.651729    3584 out.go:296] Setting OutFile to fd 812 ...
	I0511 23:18:54.711513    3584 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 23:18:54.711513    3584 out.go:309] Setting ErrFile to fd 828...
	I0511 23:18:54.711513    3584 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0511 23:18:54.738016    3584 out.go:303] Setting JSON to false
	I0511 23:18:54.741870    3584 start.go:115] hostinfo: {"hostname":"minikube4","uptime":9588,"bootTime":1652301546,"procs":166,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19042 Build 19042","kernelVersion":"10.0.19042 Build 19042","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0511 23:18:54.742468    3584 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0511 23:18:54.746207    3584 out.go:177] * [functional-20220511231058-7184] minikube v1.25.2 sur Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	I0511 23:18:54.750460    3584 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0511 23:18:54.752688    3584 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0511 23:18:54.754885    3584 out.go:177]   - MINIKUBE_LOCATION=13639
	I0511 23:18:54.757207    3584 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0511 23:18:54.760632    3584 config.go:178] Loaded profile config "functional-20220511231058-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0511 23:18:54.761476    3584 driver.go:358] Setting default libvirt URI to qemu:///system
	I0511 23:18:57.506428    3584 docker.go:137] docker version: linux-20.10.14
	I0511 23:18:57.513971    3584 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0511 23:18:59.623851    3584 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (2.1097753s)
	I0511 23:18:59.623851    3584 info.go:265] docker info: {ID:5MOX:W55Z:6RSS:V5PU:46KT:D723:NTM4:N7FK:USOO:URA3:TW6J:2PNT Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:58 OomKillDisable:true NGoroutines:52 SystemTime:2022-05-11 23:18:58.5446631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0511 23:18:59.629185    3584 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0511 23:18:59.634932    3584 start.go:284] selected driver: docker
	I0511 23:18:59.634932    3584 start.go:801] validating driver "docker" against &{Name:functional-20220511231058-7184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1652251400-14138@sha256:8c847a4aa2afc5a7fc659f9731046bf9cc7e788283deecc83c8633014fb0828a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220511231058-7184 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0511 23:18:59.634932    3584 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0511 23:19:01.127393    3584 out.go:177] 
	W0511 23:19:01.129970    3584 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0511 23:19:01.132471    3584 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (6.78s)

                                                
                                    
TestFunctional/parallel/StatusCmd (19.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 status

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 status: (6.6643836s)
functional_test.go:855: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:855: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (6.6840826s)
functional_test.go:867: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 status -o json

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:867: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 status -o json: (6.5392437s)
--- PASS: TestFunctional/parallel/StatusCmd (19.89s)
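
Note on the -f argument above: it is a Go text/template rendered over minikube's status struct, so "kublet:" is just a literal label chosen by the test (typo included), while the value comes from the {{.Kubelet}} field. Roughly (struct shape assumed, not verified against minikube's status command source):

    // Sketch only: fields addressed by the status format template.
    type clusterStatus struct {
    	Host, Kubelet, APIServer, Kubeconfig string
    }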

                                                
                                    
TestFunctional/parallel/AddonsCmd (3.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1622: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 addons list

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1622: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 addons list: (3.156566s)
functional_test.go:1634: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (3.56s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (47.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "storage-provisioner" [22e06448-629c-4013-9d6b-3f6047f76fcd] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.170957s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220511231058-7184 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220511231058-7184 apply -f testdata/storage-provisioner/pvc.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220511231058-7184 get pvc myclaim -o=json

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220511231058-7184 apply -f testdata/storage-provisioner/pod.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [272f56c6-8f14-42bb-b135-9d158f7d9cbc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [272f56c6-8f14-42bb-b135-9d158f7d9cbc] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.0353222s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220511231058-7184 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220511231058-7184 delete -f testdata/storage-provisioner/pod.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20220511231058-7184 delete -f testdata/storage-provisioner/pod.yaml: (1.9049114s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220511231058-7184 apply -f testdata/storage-provisioner/pod.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [c65e4a64-b719-4c72-88be-ed08a6fd6042] Pending
helpers_test.go:342: "sp-pod" [c65e4a64-b719-4c72-88be-ed08a6fd6042] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [c65e4a64-b719-4c72-88be-ed08a6fd6042] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.0404621s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220511231058-7184 exec sp-pod -- ls /tmp/mount
functional_test_pvc_test.go:114: (dbg) Done: kubectl --context functional-20220511231058-7184 exec sp-pod -- ls /tmp/mount: (1.5275763s)
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (47.52s)
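
The persistence check above can be replayed by hand with the same manifests the test references (testdata/storage-provisioner/pvc.yaml and pod.yaml; their contents are not shown in this log, so only the command sequence is reproduced):

	kubectl --context functional-20220511231058-7184 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-20220511231058-7184 apply -f testdata/storage-provisioner/pod.yaml
	# write a marker file into the claim-backed mount
	kubectl --context functional-20220511231058-7184 exec sp-pod -- touch /tmp/mount/foo
	# recreate the pod and confirm the file survived the restart
	kubectl --context functional-20220511231058-7184 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-20220511231058-7184 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-20220511231058-7184 exec sp-pod -- ls /tmp/mount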

                                                
                                    
TestFunctional/parallel/SSHCmd (14.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1657: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh "echo hello"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1657: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh "echo hello": (7.7068877s)
functional_test.go:1674: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh "cat /etc/hostname"

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1674: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh "cat /etc/hostname": (7.1615963s)
--- PASS: TestFunctional/parallel/SSHCmd (14.87s)

                                                
                                    
TestFunctional/parallel/CpCmd (25.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 cp testdata\cp-test.txt /home/docker/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 cp testdata\cp-test.txt /home/docker/cp-test.txt: (5.7882062s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh -n functional-20220511231058-7184 "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh -n functional-20220511231058-7184 "sudo cat /home/docker/cp-test.txt": (6.6776629s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 cp functional-20220511231058-7184:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd3903165102\001\cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 cp functional-20220511231058-7184:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd3903165102\001\cp-test.txt: (6.8364914s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh -n functional-20220511231058-7184 "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh -n functional-20220511231058-7184 "sudo cat /home/docker/cp-test.txt": (6.3522632s)
--- PASS: TestFunctional/parallel/CpCmd (25.66s)
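
The copy test is a round trip: push a file into the node, read it back over SSH, then pull it out to a host path. A condensed replay (the C:\tmp destination below is illustrative; the test used a generated temp directory):

	out/minikube-windows-amd64.exe -p functional-20220511231058-7184 cp testdata\cp-test.txt /home/docker/cp-test.txt
	out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh -n functional-20220511231058-7184 "sudo cat /home/docker/cp-test.txt"
	out/minikube-windows-amd64.exe -p functional-20220511231058-7184 cp functional-20220511231058-7184:/home/docker/cp-test.txt C:\tmp\cp-test.txt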

                                                
                                    
TestFunctional/parallel/MySQL (84.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1722: (dbg) Run:  kubectl --context functional-20220511231058-7184 replace --force -f testdata\mysql.yaml
functional_test.go:1728: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-v7bjw" [13b3d752-6f39-45ca-88ec-1269924c718e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-v7bjw" [13b3d752-6f39-45ca-88ec-1269924c718e] Running

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1728: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 56.0705152s
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220511231058-7184 exec mysql-b87c45988-v7bjw -- mysql -ppassword -e "show databases;"

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1736: (dbg) Non-zero exit: kubectl --context functional-20220511231058-7184 exec mysql-b87c45988-v7bjw -- mysql -ppassword -e "show databases;": exit status 1 (709.6105ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220511231058-7184 exec mysql-b87c45988-v7bjw -- mysql -ppassword -e "show databases;"
functional_test.go:1736: (dbg) Non-zero exit: kubectl --context functional-20220511231058-7184 exec mysql-b87c45988-v7bjw -- mysql -ppassword -e "show databases;": exit status 1 (1.1106745s)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220511231058-7184 exec mysql-b87c45988-v7bjw -- mysql -ppassword -e "show databases;"
functional_test.go:1736: (dbg) Non-zero exit: kubectl --context functional-20220511231058-7184 exec mysql-b87c45988-v7bjw -- mysql -ppassword -e "show databases;": exit status 1 (1.1207861s)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220511231058-7184 exec mysql-b87c45988-v7bjw -- mysql -ppassword -e "show databases;"
functional_test.go:1736: (dbg) Non-zero exit: kubectl --context functional-20220511231058-7184 exec mysql-b87c45988-v7bjw -- mysql -ppassword -e "show databases;": exit status 1 (688.3079ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220511231058-7184 exec mysql-b87c45988-v7bjw -- mysql -ppassword -e "show databases;"

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1736: (dbg) Non-zero exit: kubectl --context functional-20220511231058-7184 exec mysql-b87c45988-v7bjw -- mysql -ppassword -e "show databases;": exit status 1 (948.7112ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220511231058-7184 exec mysql-b87c45988-v7bjw -- mysql -ppassword -e "show databases;"
functional_test.go:1736: (dbg) Non-zero exit: kubectl --context functional-20220511231058-7184 exec mysql-b87c45988-v7bjw -- mysql -ppassword -e "show databases;": exit status 1 (589.2582ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220511231058-7184 exec mysql-b87c45988-v7bjw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (84.78s)
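
The string of non-zero exits above is expected while mysqld initializes: ERROR 2002 while the server socket does not exist yet, then ERROR 1045 while the root password is still being set, and finally success. A hedged PowerShell equivalent of that retry loop (the 5-second interval and 12-attempt cap are assumptions, not values from the test):

	# retry "show databases;" until the server accepts the connection
	for ($i = 0; $i -lt 12; $i++) {
	  kubectl --context functional-20220511231058-7184 exec mysql-b87c45988-v7bjw -- mysql -ppassword -e "show databases;"
	  if ($LASTEXITCODE -eq 0) { break }
	  Start-Sleep -Seconds 5
	}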

                                                
                                    
TestFunctional/parallel/FileSync (6.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1858: Checking for existence of /etc/test/nested/copy/7184/hosts within VM
functional_test.go:1860: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh "sudo cat /etc/test/nested/copy/7184/hosts"

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1860: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh "sudo cat /etc/test/nested/copy/7184/hosts": (6.4857754s)
functional_test.go:1865: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (6.49s)
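
File sync mirrors anything placed under the host's .minikube\files tree into the node at the same relative path, which is where the probed hosts file would have come from (the staging location is an assumption based on minikube's documented file-sync convention; it is not shown in this log):

	# after staging ...\.minikube\files\etc\test\nested\copy\7184\hosts on the host,
	# verify it landed inside the node:
	out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh "sudo cat /etc/test/nested/copy/7184/hosts"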

                                                
                                    
TestFunctional/parallel/CertSync (42.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1901: Checking for existence of /etc/ssl/certs/7184.pem within VM
functional_test.go:1902: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh "sudo cat /etc/ssl/certs/7184.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1902: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh "sudo cat /etc/ssl/certs/7184.pem": (6.9814724s)
functional_test.go:1901: Checking for existence of /usr/share/ca-certificates/7184.pem within VM
functional_test.go:1902: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh "sudo cat /usr/share/ca-certificates/7184.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1902: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh "sudo cat /usr/share/ca-certificates/7184.pem": (6.9651387s)
functional_test.go:1901: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1902: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh "sudo cat /etc/ssl/certs/51391683.0"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1902: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh "sudo cat /etc/ssl/certs/51391683.0": (6.6284511s)
functional_test.go:1928: Checking for existence of /etc/ssl/certs/71842.pem within VM
functional_test.go:1929: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh "sudo cat /etc/ssl/certs/71842.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1929: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh "sudo cat /etc/ssl/certs/71842.pem": (7.7654916s)
functional_test.go:1928: Checking for existence of /usr/share/ca-certificates/71842.pem within VM
functional_test.go:1929: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh "sudo cat /usr/share/ca-certificates/71842.pem"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1929: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh "sudo cat /usr/share/ca-certificates/71842.pem": (7.1249475s)
functional_test.go:1928: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1929: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1929: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (6.8909773s)
--- PASS: TestFunctional/parallel/CertSync (42.36s)
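
The 51391683.0 and 3ec20f2e.0 names checked above follow OpenSSL's subject-hash convention for CA lookup directories, so the expected file name can be derived from the PEM itself. A sketch, run against the same 7184.pem the test uses:

	# prints the 8-hex-digit subject hash, e.g. 51391683
	openssl x509 -noout -subject_hash -in 7184.pem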

                                                
                                    
TestFunctional/parallel/NodeLabels (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220511231058-7184 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.30s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (6.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1956: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh "sudo systemctl is-active crio"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1956: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh "sudo systemctl is-active crio": exit status 1 (6.353186s)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (6.35s)
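
The exit-status pair above is the expected shape of this check: systemctl is-active prints "inactive" and exits 3 for a stopped unit, which minikube's ssh wrapper surfaces as a non-zero exit of its own. To confirm by hand:

	out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh "sudo systemctl is-active crio"
	# expected: stdout "inactive", systemctl exit status 3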

                                                
                                    
TestFunctional/parallel/DockerEnv/powershell (28.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/powershell

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:494: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220511231058-7184 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20220511231058-7184"

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:494: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220511231058-7184 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-20220511231058-7184": (18.0338694s)
functional_test.go:517: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220511231058-7184 docker-env | Invoke-Expression ; docker images"

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv/powershell
functional_test.go:517: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-20220511231058-7184 docker-env | Invoke-Expression ; docker images": (10.7353638s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (28.78s)
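
The docker-env flow points the host's Docker client at the cluster's daemon for the current shell. A hand-run equivalent of the test's PowerShell pipeline:

	out/minikube-windows-amd64.exe -p functional-20220511231058-7184 docker-env | Invoke-Expression
	# the local docker CLI now talks to the minikube daemon
	docker images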

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (4.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image ls --format short

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image ls --format short: (4.2306414s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.5
k8s.gcr.io/kube-proxy:v1.23.5
k8s.gcr.io/kube-controller-manager:v1.23.5
k8s.gcr.io/kube-apiserver:v1.23.5
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-20220511231058-7184
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220511231058-7184
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (4.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (4.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image ls --format table

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image ls --format table: (4.2037148s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image ls --format table:
|---------------------------------------------|--------------------------------|---------------|--------|
|                    Image                    |              Tag               |   Image ID    |  Size  |
|---------------------------------------------|--------------------------------|---------------|--------|
| docker.io/library/mysql                     | 5.7                            | a3d35804fa376 | 462MB  |
| k8s.gcr.io/kube-apiserver                   | v1.23.5                        | 3fc1d62d65872 | 135MB  |
| k8s.gcr.io/kube-proxy                       | v1.23.5                        | 3c53fa8541f95 | 112MB  |
| k8s.gcr.io/pause                            | latest                         | 350b164e7ae1d | 240kB  |
| k8s.gcr.io/kube-controller-manager          | v1.23.5                        | b0c9e5e4dbb14 | 125MB  |
| k8s.gcr.io/pause                            | 3.6                            | 6270bb605e12e | 683kB  |
| k8s.gcr.io/pause                            | 3.3                            | 0184c1613d929 | 683kB  |
| k8s.gcr.io/pause                            | 3.1                            | da86e6ba6ca19 | 742kB  |
| gcr.io/google-containers/addon-resizer      | functional-20220511231058-7184 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/echoserver                       | 1.8                            | 82e4c8a736a4f | 95.4MB |
| k8s.gcr.io/etcd                             | 3.5.1-0                        | 25f8c7f3da61c | 293MB  |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                         | a4ca41631cc7a | 46.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                             | 6e38f40d628db | 31.5MB |
| docker.io/library/minikube-local-cache-test | functional-20220511231058-7184 | 1bbb68b6e0b67 | 30B    |
| docker.io/library/nginx                     | latest                         | 7425d3a7c478e | 142MB  |
| docker.io/library/nginx                     | alpine                         | 51696c87e77e4 | 23.4MB |
| k8s.gcr.io/kube-scheduler                   | v1.23.5                        | 884d49d6d8c9f | 53.5MB |
|---------------------------------------------|--------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (4.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (4.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image ls --format json
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image ls --format json: (4.2750716s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image ls --format json:
[{"id":"b0c9e5e4dbb14459edc593b39add54f5497e42d4eecc8d03bee5daf9537b0dae","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.5"],"size":"125000000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"1bbb68b6e0b675f6fe50ba55590a7215e046c1a1537aadece483e766a4aac8ec","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220511231058-7184"],"size":"30"},{"id":"7425d3a7c478efbeb75f0937060117343a9a510f72f5f7ad9f14b1501a36940c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"3fc1d62d65872296462b198ab7842d0
faf8c336b236c4a0dacfce67bec95257f","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.5"],"size":"135000000"},{"id":"884d49d6d8c9f40672d20c78e300ffee238d01c1ccb2c132937125d97a596fd7","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.5"],"size":"53500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220511231058-7184"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"3c53fa8541f95165d3def81704febb85e2e13f90872667f9939dd856dc88e874","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.5"],"size":"112000000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repo
Tags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"a3d35804fa376a141b9a9dad8f5534c3179f4c328d6efc67c5c5145d257c291a","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"462000000"},{"id":"51696c87e77e4ff7a53af9be837f35d4eacdb47b4ca83ba5fd5e4b5101d98502","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23400000"},{"id":"25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"293000000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (4.28s)
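
The JSON output above is a flat array of {id, repoDigests, repoTags, size} objects, which makes it convenient for scripting. A sketch extracting the repo tags with jq (jq is an assumption here; it is not part of this test run):

	out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image ls --format json | jq -r ".[].repoTags[]"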

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (4.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image ls --format yaml

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image ls --format yaml: (4.3240209s)
functional_test.go:261: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image ls --format yaml:
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 3fc1d62d65872296462b198ab7842d0faf8c336b236c4a0dacfce67bec95257f
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.5
size: "135000000"
- id: 3c53fa8541f95165d3def81704febb85e2e13f90872667f9939dd856dc88e874
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.5
size: "112000000"
- id: 884d49d6d8c9f40672d20c78e300ffee238d01c1ccb2c132937125d97a596fd7
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.5
size: "53500000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: a3d35804fa376a141b9a9dad8f5534c3179f4c328d6efc67c5c5145d257c291a
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "462000000"
- id: 25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "293000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220511231058-7184
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 1bbb68b6e0b675f6fe50ba55590a7215e046c1a1537aadece483e766a4aac8ec
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220511231058-7184
size: "30"
- id: 51696c87e77e4ff7a53af9be837f35d4eacdb47b4ca83ba5fd5e4b5101d98502
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23400000"
- id: b0c9e5e4dbb14459edc593b39add54f5497e42d4eecc8d03bee5daf9537b0dae
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.5
size: "125000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 7425d3a7c478efbeb75f0937060117343a9a510f72f5f7ad9f14b1501a36940c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (4.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (18.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh pgrep buildkitd

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 ssh pgrep buildkitd: exit status 1 (6.2474473s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image build -t localhost/my-image:functional-20220511231058-7184 testdata\build

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:310: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image build -t localhost/my-image:functional-20220511231058-7184 testdata\build: (7.7022317s)
functional_test.go:315: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image build -t localhost/my-image:functional-20220511231058-7184 testdata\build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in d83413bface6
Removing intermediate container d83413bface6
---> 3d5dc6d82b74
Step 3/3 : ADD content.txt /
---> a435d2c0045f
Successfully built a435d2c0045f
Successfully tagged localhost/my-image:functional-20220511231058-7184
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image ls

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image ls: (4.2906385s)
E0511 23:24:08.230275    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (18.24s)
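
The three build steps in the output pin down the Dockerfile under testdata\build, so the whole test can be replayed with one command:

	# Dockerfile reconstructed from the build steps above:
	#   FROM gcr.io/k8s-minikube/busybox
	#   RUN true
	#   ADD content.txt /
	out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image build -t localhost/my-image:functional-20220511231058-7184 testdata\build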

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (6.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.0683974s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220511231058-7184
functional_test.go:342: (dbg) Done: docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220511231058-7184: (1.2378535s)
--- PASS: TestFunctional/parallel/ImageCommands/Setup (6.33s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (4.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2048: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 update-context --alsologtostderr -v=2

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2048: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 update-context --alsologtostderr -v=2: (4.0484307s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (4.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (4.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2048: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 update-context --alsologtostderr -v=2

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2048: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 update-context --alsologtostderr -v=2: (4.1550064s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (4.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (4.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2048: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 update-context --alsologtostderr -v=2

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2048: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 update-context --alsologtostderr -v=2: (4.0431793s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (4.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-20220511231058-7184 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220511231058-7184 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [19039f8e-372f-4a08-9d16-ccf696799016] Pending
helpers_test.go:342: "nginx-svc" [19039f8e-372f-4a08-9d16-ccf696799016] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [19039f8e-372f-4a08-9d16-ccf696799016] Running

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 16.1048017s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.88s)
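
testdata\testsvc.yaml itself is not reproduced in this log; from the wait condition above it creates a pod named nginx-svc labeled run=nginx-svc, fronted by a service for the tunnel to expose (the service name matching the pod name is an assumption). A minimal hand-check once the pod is Running:

	kubectl --context functional-20220511231058-7184 get pods -l run=nginx-svc
	kubectl --context functional-20220511231058-7184 get svc nginx-svc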

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (19.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220511231058-7184

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220511231058-7184: (14.646331s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image ls

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image ls: (4.521248s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (19.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (14.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220511231058-7184

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220511231058-7184: (9.5216666s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image ls

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image ls: (4.9078975s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (14.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (24.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (4.7630495s)
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220511231058-7184
functional_test.go:235: (dbg) Done: docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220511231058-7184: (1.206282s)
functional_test.go:240: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220511231058-7184

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:240: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220511231058-7184: (14.1706144s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image ls

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image ls: (4.5463262s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (24.71s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image save gcr.io/google-containers/addon-resizer:functional-20220511231058-7184 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image save gcr.io/google-containers/addon-resizer:functional-20220511231058-7184 C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (8.9613041s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.96s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (14.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image rm gcr.io/google-containers/addon-resizer:functional-20220511231058-7184

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image rm gcr.io/google-containers/addon-resizer:functional-20220511231058-7184: (9.7937168s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image ls

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image ls: (4.9096675s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (14.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image load C:\jenkins\workspace\Docker_Windows_integration\addon-resizer-save.tar: (12.9964474s)
functional_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image ls
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:443: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image ls: (5.0042481s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (18.00s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (15.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220511231058-7184
functional_test.go:414: (dbg) Done: docker rmi gcr.io/google-containers/addon-resizer:functional-20220511231058-7184: (1.1392368s)
functional_test.go:419: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220511231058-7184
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220511231058-7184: (13.0706019s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220511231058-7184
functional_test.go:424: (dbg) Done: docker image inspect gcr.io/google-containers/addon-resizer:functional-20220511231058-7184: (1.1593815s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (15.39s)
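
The ImageCommands subtests above exercise a full round-trip: save an image to a tarball, remove it, load it back, and list the result. A minimal sketch of the same flow as a standalone Go program, assuming minikube is on PATH; the profile name below is a hypothetical placeholder, not the one from this run.

package main

import (
	"fmt"
	"os/exec"
)

// run shells out and echoes the combined output, mirroring the
// Run/Done pattern the test helpers log above.
func run(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	fmt.Printf("$ %v\n%s", args, out)
	return err
}

func main() {
	profile := "functional-example" // hypothetical profile name
	img := "gcr.io/google-containers/addon-resizer:" + profile
	steps := [][]string{
		{"minikube", "-p", profile, "image", "save", img, "addon-resizer-save.tar"},
		{"minikube", "-p", profile, "image", "rm", img},
		{"minikube", "-p", profile, "image", "load", "addon-resizer-save.tar"},
		{"minikube", "-p", profile, "image", "ls"},
	}
	for _, s := range steps {
		if err := run(s...); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}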

TestFunctional/parallel/ProfileCmd/profile_not_create (9.63s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Done: out/minikube-windows-amd64.exe profile lis: (3.027831s)
functional_test.go:1273: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1273: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (6.6059039s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (9.63s)

TestFunctional/parallel/ProfileCmd/profile_list (7.03s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-windows-amd64.exe profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Done: out/minikube-windows-amd64.exe profile list: (6.6570622s)
functional_test.go:1313: Took "6.6571821s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1327: Took "371.3087ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (7.03s)

TestFunctional/parallel/ProfileCmd/profile_json_output (6.81s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (6.466822s)
functional_test.go:1364: Took "6.4668958s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1377: Took "346.2559ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (6.81s)
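
The ProfileCmd JSON subtests drive `profile list` with machine-readable output. A hedged sketch of consuming that output, assuming the top-level valid/invalid grouping that current minikube releases emit; the field subset here is illustrative, the real payload carries a full cluster config per profile.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Profile captures only the fields this sketch needs.
type Profile struct {
	Name   string
	Status string
}

func main() {
	// Assumes the {"invalid": [...], "valid": [...]} shape printed by
	// `minikube profile list -o json` in recent releases.
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var profiles struct {
		Valid   []Profile `json:"valid"`
		Invalid []Profile `json:"invalid"`
	}
	if err := json.Unmarshal(out, &profiles); err != nil {
		panic(err)
	}
	for _, p := range profiles.Valid {
		fmt.Println(p.Name, p.Status)
	}
}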

TestFunctional/parallel/Version/short (0.38s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2185: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 version --short
--- PASS: TestFunctional/parallel/Version/short (0.38s)

TestFunctional/parallel/Version/components (6.13s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-20220511231058-7184 version -o=json --components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2199: (dbg) Done: out/minikube-windows-amd64.exe -p functional-20220511231058-7184 version -o=json --components: (6.1326603s)
--- PASS: TestFunctional/parallel/Version/components (6.13s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-20220511231058-7184 tunnel --alsologtostderr] ...
helpers_test.go:506: unable to kill pid 8856: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_addon-resizer_images (0.02s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8: context deadline exceeded (0s)
functional_test.go:187: failed to remove image "gcr.io/google-containers/addon-resizer:1.8.8" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8": context deadline exceeded
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220511231058-7184
functional_test.go:185: (dbg) Non-zero exit: docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220511231058-7184: context deadline exceeded (36.6µs)
functional_test.go:187: failed to remove image "gcr.io/google-containers/addon-resizer:functional-20220511231058-7184" from docker images. args "docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220511231058-7184": context deadline exceeded
--- PASS: TestFunctional/delete_addon-resizer_images (0.02s)
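
Note the near-zero durations on the failed `docker rmi` calls above (0s, 36.6µs): the cleanup runs under a test context whose deadline has already expired, so the command fails with context.DeadlineExceeded before Docker is ever invoked, and the helper merely logs the failure rather than failing the test. A minimal reproduction, assuming a recent Go toolchain where starting a command under an already-done context returns the context error:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Deadline already in the past: Run fails almost immediately with
	// "context deadline exceeded", without launching the process.
	ctx, cancel := context.WithDeadline(context.Background(), time.Now())
	defer cancel()

	start := time.Now()
	err := exec.CommandContext(ctx, "docker", "rmi", "-f", "example:tag").Run()
	fmt.Printf("after %s: %v\n", time.Since(start), err)
}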

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220511231058-7184
functional_test.go:193: (dbg) Non-zero exit: docker rmi -f localhost/my-image:functional-20220511231058-7184: context deadline exceeded (33.7µs)
functional_test.go:195: failed to remove image my-image from docker images. args "docker rmi -f localhost/my-image:functional-20220511231058-7184": context deadline exceeded
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220511231058-7184
functional_test.go:201: (dbg) Non-zero exit: docker rmi -f minikube-local-cache-test:functional-20220511231058-7184: context deadline exceeded (0s)
functional_test.go:203: failed to remove image minikube local cache test images from docker. args "docker rmi -f minikube-local-cache-test:functional-20220511231058-7184": context deadline exceeded
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestIngressAddonLegacy/StartLegacyK8sCluster (133.52s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220511235145-7184 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-20220511235145-7184 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker: (2m13.5193567s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (133.52s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (48.36s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220511235145-7184 addons enable ingress --alsologtostderr -v=5
E0511 23:54:08.314215    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220511235145-7184 addons enable ingress --alsologtostderr -v=5: (48.3575145s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (48.36s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (4.8s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220511235145-7184 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220511235145-7184 addons enable ingress-dns --alsologtostderr -v=5: (4.8004555s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (4.80s)

TestJSONOutput/start/Command (130.42s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-20220511235600-7184 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
E0511 23:56:24.733016    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
E0511 23:56:24.747998    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
E0511 23:56:24.763292    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
E0511 23:56:24.795132    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
E0511 23:56:24.842621    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
E0511 23:56:24.938276    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
E0511 23:56:25.111803    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
E0511 23:56:25.442911    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
E0511 23:56:26.089851    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
E0511 23:56:27.370415    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
E0511 23:56:29.936508    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
E0511 23:56:35.063428    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
E0511 23:56:45.307487    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
E0511 23:57:05.792415    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
E0511 23:57:46.763939    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-20220511235600-7184 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: (2m10.4210604s)
--- PASS: TestJSONOutput/start/Command (130.42s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (6.12s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-20220511235600-7184 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-20220511235600-7184 --output=json --user=testUser: (6.1155695s)
--- PASS: TestJSONOutput/pause/Command (6.12s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (6.08s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-20220511235600-7184 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-20220511235600-7184 --output=json --user=testUser: (6.0775755s)
--- PASS: TestJSONOutput/unpause/Command (6.08s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (18.02s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-20220511235600-7184 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-20220511235600-7184 --output=json --user=testUser: (18.0212785s)
--- PASS: TestJSONOutput/stop/Command (18.02s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (7.64s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-20220511235901-7184 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-20220511235901-7184 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (393.0871ms)
-- stdout --
	{"specversion":"1.0","id":"ad445c6b-0934-4a77-9142-6bce13fffdac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220511235901-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6eb396f9-0314-4ff5-ac30-1342e16265ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"b2c393ff-56c7-4abb-8a41-be4757ce07f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"cb04832b-0bae-4db4-b363-3958ca6a195c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13639"}}
	{"specversion":"1.0","id":"073c289f-30f0-4a20-836c-c4840c68896a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"835e8c9f-d2bc-4a37-b62f-6777daac431b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220511235901-7184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-20220511235901-7184
E0511 23:59:08.337200    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0511 23:59:08.695782    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-20220511235901-7184: (7.2464511s)
--- PASS: TestErrorJSONOutput (7.64s)
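
Every `--output=json` line above is a CloudEvents-style envelope whose payload sits under `data`. A hedged sketch of filtering such a stream for error events, with the envelope field names taken from the events shown in the stdout block:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event mirrors the envelope fields visible in the log above; all of
// the data values in those events are plain strings.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Feed it lines like the ones in the -- stdout -- block above.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "{") {
			continue // skip non-JSON noise interleaved in the log
		}
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			continue
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Println("exit code", ev.Data["exitcode"], "-", ev.Data["message"])
		}
	}
}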

TestKicCustomNetwork/create_custom_network (139.52s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20220511235909-7184 --network=
E0511 23:59:52.706665    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
E0511 23:59:52.721873    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
E0511 23:59:52.737595    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
E0511 23:59:52.768121    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
E0511 23:59:52.814026    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
E0511 23:59:52.907102    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
E0511 23:59:53.077628    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
E0511 23:59:53.402799    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
E0511 23:59:54.050115    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
E0511 23:59:55.340271    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
E0511 23:59:58.272434    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
E0512 00:00:03.403364    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
E0512 00:00:13.652074    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
E0512 00:00:34.147749    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20220511235909-7184 --network=: (1m57.1173103s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:122: (dbg) Done: docker network ls --format {{.Name}}: (1.0717588s)
helpers_test.go:175: Cleaning up "docker-network-20220511235909-7184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20220511235909-7184
E0512 00:01:15.115244    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
E0512 00:01:24.761889    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20220511235909-7184: (21.3162596s)
--- PASS: TestKicCustomNetwork/create_custom_network (139.52s)

TestKicCustomNetwork/use_default_bridge_network (125.55s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-20220512000129-7184 --network=bridge
E0512 00:01:52.560713    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
E0512 00:02:37.045974    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-20220512000129-7184 --network=bridge: (1m47.4404088s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:122: (dbg) Done: docker network ls --format {{.Name}}: (1.0975867s)
helpers_test.go:175: Cleaning up "docker-network-20220512000129-7184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-20220512000129-7184
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-20220512000129-7184: (17.0045884s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (125.55s)

TestKicExistingNetwork (140.99s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:122: (dbg) Done: docker network ls --format {{.Name}}: (1.0980387s)
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-20220512000339-7184 --network=existing-network
E0512 00:04:08.353532    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0512 00:04:52.729023    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
E0512 00:05:20.905437    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-20220512000339-7184 --network=existing-network: (1m52.571336s)
helpers_test.go:175: Cleaning up "existing-network-20220512000339-7184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-20220512000339-7184
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-20220512000339-7184: (21.602379s)
--- PASS: TestKicExistingNetwork (140.99s)

TestKicCustomSubnet (143.48s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-20220512000555-7184 --subnet=192.168.60.0/24
E0512 00:06:24.774072    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-20220512000555-7184 --subnet=192.168.60.0/24: (2m1.1141787s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220512000555-7184 --format "{{(index .IPAM.Config 0).Subnet}}"
kic_custom_network_test.go:133: (dbg) Done: docker network inspect custom-subnet-20220512000555-7184 --format "{{(index .IPAM.Config 0).Subnet}}": (1.1088133s)
helpers_test.go:175: Cleaning up "custom-subnet-20220512000555-7184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-20220512000555-7184
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-20220512000555-7184: (21.2524016s)
--- PASS: TestKicCustomSubnet (143.48s)
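
The `--format "{{(index .IPAM.Config 0).Subnet}}"` argument passed to `docker network inspect` above is a Go text/template evaluated against Docker's network-inspect payload. The same expression can be exercised locally against stand-in types; the structs below are hypothetical stand-ins trimmed to the fields the template touches.

package main

import (
	"os"
	"text/template"
)

// Minimal stand-ins for the parts of Docker's inspect payload the
// template reads.
type ipamConfig struct{ Subnet string }
type network struct {
	IPAM struct{ Config []ipamConfig }
}

func main() {
	tmpl := template.Must(template.New("subnet").
		Parse(`{{(index .IPAM.Config 0).Subnet}}`))
	var n network
	n.IPAM.Config = []ipamConfig{{Subnet: "192.168.60.0/24"}}
	// Prints the subnet the test asserts on: 192.168.60.0/24
	if err := tmpl.Execute(os.Stdout, n); err != nil {
		panic(err)
	}
}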

TestMainNoArgs (0.35s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.35s)

TestMountStart/serial/StartWithMountFirst (51.76s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-20220512000819-7184 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
E0512 00:09:08.362439    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-20220512000819-7184 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (50.7575356s)
--- PASS: TestMountStart/serial/StartWithMountFirst (51.76s)

TestMountStart/serial/VerifyMountFirst (6.43s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-20220512000819-7184 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-20220512000819-7184 ssh -- ls /minikube-host: (6.430066s)
--- PASS: TestMountStart/serial/VerifyMountFirst (6.43s)

TestMountStart/serial/StartWithMountSecond (52.39s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-20220512000819-7184 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
E0512 00:09:52.741916    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-20220512000819-7184 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (51.372975s)
--- PASS: TestMountStart/serial/StartWithMountSecond (52.39s)

TestMountStart/serial/VerifyMountSecond (6.42s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-20220512000819-7184 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-20220512000819-7184 ssh -- ls /minikube-host: (6.4176631s)
--- PASS: TestMountStart/serial/VerifyMountSecond (6.42s)

TestMountStart/serial/DeleteFirst (19.94s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-20220512000819-7184 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-20220512000819-7184 --alsologtostderr -v=5: (19.941342s)
--- PASS: TestMountStart/serial/DeleteFirst (19.94s)

TestMountStart/serial/VerifyMountPostDelete (6.47s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-20220512000819-7184 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-20220512000819-7184 ssh -- ls /minikube-host: (6.4713817s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (6.47s)

TestMountStart/serial/Stop (8.92s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-20220512000819-7184
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-20220512000819-7184: (8.9161793s)
--- PASS: TestMountStart/serial/Stop (8.92s)

TestMountStart/serial/RestartStopped (29.86s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-20220512000819-7184
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-20220512000819-7184: (28.8566452s)
--- PASS: TestMountStart/serial/RestartStopped (29.86s)

TestMountStart/serial/VerifyMountPostStop (6.45s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-20220512000819-7184 ssh -- ls /minikube-host
E0512 00:11:24.784237    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-20220512000819-7184 ssh -- ls /minikube-host: (6.4465129s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (6.45s)

TestMultiNode/serial/FreshStart2Nodes (250.65s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220512001153-7184 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
E0512 00:12:47.967874    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
E0512 00:14:08.377944    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0512 00:14:52.752212    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
E0512 00:15:31.608521    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
multinode_test.go:83: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220512001153-7184 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (4m0.6340326s)
multinode_test.go:89: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 status --alsologtostderr
multinode_test.go:89: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 status --alsologtostderr: (10.0126475s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (250.65s)

TestMultiNode/serial/DeployApp2Nodes (26.24s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:479: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (2.6450141s)
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- rollout status deployment/busybox: (3.9106704s)
multinode_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- get pods -o jsonpath='{.items[*].status.podIP}': (2.0274674s)
multinode_test.go:502: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:502: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- get pods -o jsonpath='{.items[*].metadata.name}': (1.9954247s)
multinode_test.go:510: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- exec busybox-7978565885-bxjgl -- nslookup kubernetes.io
E0512 00:16:16.308935    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
multinode_test.go:510: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- exec busybox-7978565885-bxjgl -- nslookup kubernetes.io: (3.4998755s)
multinode_test.go:510: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- exec busybox-7978565885-gzkt6 -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- exec busybox-7978565885-gzkt6 -- nslookup kubernetes.io: (3.3083299s)
multinode_test.go:520: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- exec busybox-7978565885-bxjgl -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- exec busybox-7978565885-bxjgl -- nslookup kubernetes.default: (2.1715649s)
multinode_test.go:520: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- exec busybox-7978565885-gzkt6 -- nslookup kubernetes.default
E0512 00:16:24.805124    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
multinode_test.go:520: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- exec busybox-7978565885-gzkt6 -- nslookup kubernetes.default: (2.1800613s)
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- exec busybox-7978565885-bxjgl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- exec busybox-7978565885-bxjgl -- nslookup kubernetes.default.svc.cluster.local: (2.2636031s)
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- exec busybox-7978565885-gzkt6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- exec busybox-7978565885-gzkt6 -- nslookup kubernetes.default.svc.cluster.local: (2.232352s)
--- PASS: TestMultiNode/serial/DeployApp2Nodes (26.24s)

TestMultiNode/serial/PingHostFrom2Pods (10.91s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:538: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- get pods -o jsonpath='{.items[*].metadata.name}': (1.9811869s)
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- exec busybox-7978565885-bxjgl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:546: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- exec busybox-7978565885-bxjgl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (2.2257082s)
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- exec busybox-7978565885-bxjgl -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- exec busybox-7978565885-bxjgl -- sh -c "ping -c 1 192.168.65.2": (2.2456295s)
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- exec busybox-7978565885-gzkt6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:546: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- exec busybox-7978565885-gzkt6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (2.2628513s)
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- exec busybox-7978565885-gzkt6 -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-20220512001153-7184 -- exec busybox-7978565885-gzkt6 -- sh -c "ping -c 1 192.168.65.2": (2.1951934s)
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (10.91s)
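
The busybox pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` used above simply takes the third space-separated field of the fifth output line, which is where busybox nslookup prints the resolved address. An equivalent extraction in Go, assuming that same fixed layout; the sample output below is illustrative, not captured from this run.

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: take line 5 of the
// nslookup output and return its third space-delimited field. The
// layout is busybox-specific, which is why the test hardcodes NR==5.
func hostIP(nslookupOutput string) (string, bool) {
	lines := strings.Split(nslookupOutput, "\n")
	if len(lines) < 5 {
		return "", false
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return "", false
	}
	return fields[2], true
}

func main() {
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.65.2 host.minikube.internal\n"
	if ip, ok := hostIP(sample); ok {
		fmt.Println(ip) // 192.168.65.2
	}
}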

TestMultiNode/serial/AddNode (119.7s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20220512001153-7184 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-20220512001153-7184 -v 3 --alsologtostderr: (1m46.1052983s)
multinode_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 status --alsologtostderr: (13.5966297s)
--- PASS: TestMultiNode/serial/AddNode (119.70s)

TestMultiNode/serial/ProfileList (6.53s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (6.5303157s)
--- PASS: TestMultiNode/serial/ProfileList (6.53s)

TestMultiNode/serial/CopyFile (218.71s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 status --output json --alsologtostderr: (13.2912802s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 cp testdata\cp-test.txt multinode-20220512001153-7184:/home/docker/cp-test.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 cp testdata\cp-test.txt multinode-20220512001153-7184:/home/docker/cp-test.txt: (6.4718924s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184 "sudo cat /home/docker/cp-test.txt"
E0512 00:19:08.398751    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184 "sudo cat /home/docker/cp-test.txt": (6.4764815s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 cp multinode-20220512001153-7184:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile3413073136\001\cp-test_multinode-20220512001153-7184.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 cp multinode-20220512001153-7184:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile3413073136\001\cp-test_multinode-20220512001153-7184.txt: (6.3211912s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184 "sudo cat /home/docker/cp-test.txt": (6.3817923s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 cp multinode-20220512001153-7184:/home/docker/cp-test.txt multinode-20220512001153-7184-m02:/home/docker/cp-test_multinode-20220512001153-7184_multinode-20220512001153-7184-m02.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 cp multinode-20220512001153-7184:/home/docker/cp-test.txt multinode-20220512001153-7184-m02:/home/docker/cp-test_multinode-20220512001153-7184_multinode-20220512001153-7184-m02.txt: (8.8007407s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184 "sudo cat /home/docker/cp-test.txt": (6.3907896s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m02 "sudo cat /home/docker/cp-test_multinode-20220512001153-7184_multinode-20220512001153-7184-m02.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m02 "sudo cat /home/docker/cp-test_multinode-20220512001153-7184_multinode-20220512001153-7184-m02.txt": (6.2899486s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 cp multinode-20220512001153-7184:/home/docker/cp-test.txt multinode-20220512001153-7184-m03:/home/docker/cp-test_multinode-20220512001153-7184_multinode-20220512001153-7184-m03.txt
E0512 00:19:52.778182    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 cp multinode-20220512001153-7184:/home/docker/cp-test.txt multinode-20220512001153-7184-m03:/home/docker/cp-test_multinode-20220512001153-7184_multinode-20220512001153-7184-m03.txt: (8.7279157s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184 "sudo cat /home/docker/cp-test.txt": (6.3400038s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m03 "sudo cat /home/docker/cp-test_multinode-20220512001153-7184_multinode-20220512001153-7184-m03.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m03 "sudo cat /home/docker/cp-test_multinode-20220512001153-7184_multinode-20220512001153-7184-m03.txt": (6.3004496s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 cp testdata\cp-test.txt multinode-20220512001153-7184-m02:/home/docker/cp-test.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 cp testdata\cp-test.txt multinode-20220512001153-7184-m02:/home/docker/cp-test.txt: (6.3037047s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m02 "sudo cat /home/docker/cp-test.txt": (6.2677317s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 cp multinode-20220512001153-7184-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile3413073136\001\cp-test_multinode-20220512001153-7184-m02.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 cp multinode-20220512001153-7184-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile3413073136\001\cp-test_multinode-20220512001153-7184-m02.txt: (6.4013384s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m02 "sudo cat /home/docker/cp-test.txt": (6.3695771s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 cp multinode-20220512001153-7184-m02:/home/docker/cp-test.txt multinode-20220512001153-7184:/home/docker/cp-test_multinode-20220512001153-7184-m02_multinode-20220512001153-7184.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 cp multinode-20220512001153-7184-m02:/home/docker/cp-test.txt multinode-20220512001153-7184:/home/docker/cp-test_multinode-20220512001153-7184-m02_multinode-20220512001153-7184.txt: (8.6619679s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m02 "sudo cat /home/docker/cp-test.txt": (6.3518834s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184 "sudo cat /home/docker/cp-test_multinode-20220512001153-7184-m02_multinode-20220512001153-7184.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184 "sudo cat /home/docker/cp-test_multinode-20220512001153-7184-m02_multinode-20220512001153-7184.txt": (6.3676666s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 cp multinode-20220512001153-7184-m02:/home/docker/cp-test.txt multinode-20220512001153-7184-m03:/home/docker/cp-test_multinode-20220512001153-7184-m02_multinode-20220512001153-7184-m03.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 cp multinode-20220512001153-7184-m02:/home/docker/cp-test.txt multinode-20220512001153-7184-m03:/home/docker/cp-test_multinode-20220512001153-7184-m02_multinode-20220512001153-7184-m03.txt: (8.5638067s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m02 "sudo cat /home/docker/cp-test.txt": (6.3208135s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m03 "sudo cat /home/docker/cp-test_multinode-20220512001153-7184-m02_multinode-20220512001153-7184-m03.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m03 "sudo cat /home/docker/cp-test_multinode-20220512001153-7184-m02_multinode-20220512001153-7184-m03.txt": (6.3381676s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 cp testdata\cp-test.txt multinode-20220512001153-7184-m03:/home/docker/cp-test.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 cp testdata\cp-test.txt multinode-20220512001153-7184-m03:/home/docker/cp-test.txt: (6.5109532s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m03 "sudo cat /home/docker/cp-test.txt"
E0512 00:21:24.816963    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m03 "sudo cat /home/docker/cp-test.txt": (6.3941108s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 cp multinode-20220512001153-7184-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile3413073136\001\cp-test_multinode-20220512001153-7184-m03.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 cp multinode-20220512001153-7184-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile3413073136\001\cp-test_multinode-20220512001153-7184-m03.txt: (6.505259s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m03 "sudo cat /home/docker/cp-test.txt": (6.3857738s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 cp multinode-20220512001153-7184-m03:/home/docker/cp-test.txt multinode-20220512001153-7184:/home/docker/cp-test_multinode-20220512001153-7184-m03_multinode-20220512001153-7184.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 cp multinode-20220512001153-7184-m03:/home/docker/cp-test.txt multinode-20220512001153-7184:/home/docker/cp-test_multinode-20220512001153-7184-m03_multinode-20220512001153-7184.txt: (8.659508s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m03 "sudo cat /home/docker/cp-test.txt": (6.4669898s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184 "sudo cat /home/docker/cp-test_multinode-20220512001153-7184-m03_multinode-20220512001153-7184.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184 "sudo cat /home/docker/cp-test_multinode-20220512001153-7184-m03_multinode-20220512001153-7184.txt": (6.4169837s)
helpers_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 cp multinode-20220512001153-7184-m03:/home/docker/cp-test.txt multinode-20220512001153-7184-m02:/home/docker/cp-test_multinode-20220512001153-7184-m03_multinode-20220512001153-7184-m02.txt
helpers_test.go:554: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 cp multinode-20220512001153-7184-m03:/home/docker/cp-test.txt multinode-20220512001153-7184-m02:/home/docker/cp-test_multinode-20220512001153-7184-m03_multinode-20220512001153-7184-m02.txt: (8.7913254s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m03 "sudo cat /home/docker/cp-test.txt": (6.4702222s)
helpers_test.go:532: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m02 "sudo cat /home/docker/cp-test_multinode-20220512001153-7184-m03_multinode-20220512001153-7184-m02.txt"
helpers_test.go:532: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m02 "sudo cat /home/docker/cp-test_multinode-20220512001153-7184-m03_multinode-20220512001153-7184-m02.txt": (6.3558103s)
--- PASS: TestMultiNode/serial/CopyFile (218.71s)
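The long sequence above is the full minikube cp direction matrix; condensed to its three shapes (the host destination path is illustrative):

	# host -> node
	minikube -p multinode-20220512001153-7184 cp testdata\cp-test.txt multinode-20220512001153-7184:/home/docker/cp-test.txt
	# node -> host
	minikube -p multinode-20220512001153-7184 cp multinode-20220512001153-7184:/home/docker/cp-test.txt <host-dir>\cp-test.txt
	# node -> node
	minikube -p multinode-20220512001153-7184 cp multinode-20220512001153-7184:/home/docker/cp-test.txt multinode-20220512001153-7184-m02:/home/docker/cp-test.txt
	# each copy is then read back for verification:
	minikube -p multinode-20220512001153-7184 ssh -n multinode-20220512001153-7184-m02 "sudo cat /home/docker/cp-test.txt"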

TestMultiNode/serial/StopNode (30.4s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 node stop m03: (7.7318324s)
multinode_test.go:214: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 status: exit status 7 (11.3466295s)

-- stdout --
	multinode-20220512001153-7184
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220512001153-7184-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220512001153-7184-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 status --alsologtostderr: exit status 7 (11.3196304s)

-- stdout --
	multinode-20220512001153-7184
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220512001153-7184-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220512001153-7184-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0512 00:22:45.235488   10184 out.go:296] Setting OutFile to fd 836 ...
	I0512 00:22:45.293290   10184 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 00:22:45.293290   10184 out.go:309] Setting ErrFile to fd 980...
	I0512 00:22:45.293290   10184 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 00:22:45.305867   10184 out.go:303] Setting JSON to false
	I0512 00:22:45.305867   10184 mustload.go:65] Loading cluster: multinode-20220512001153-7184
	I0512 00:22:45.307520   10184 config.go:178] Loaded profile config "multinode-20220512001153-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 00:22:45.307602   10184 status.go:253] checking status of multinode-20220512001153-7184 ...
	I0512 00:22:45.322776   10184 cli_runner.go:164] Run: docker container inspect multinode-20220512001153-7184 --format={{.State.Status}}
	I0512 00:22:47.938738   10184 cli_runner.go:217] Completed: docker container inspect multinode-20220512001153-7184 --format={{.State.Status}}: (2.6158278s)
	I0512 00:22:47.938738   10184 status.go:328] multinode-20220512001153-7184 host status = "Running" (err=<nil>)
	I0512 00:22:47.938738   10184 host.go:66] Checking if "multinode-20220512001153-7184" exists ...
	I0512 00:22:47.951738   10184 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220512001153-7184
	I0512 00:22:49.034408   10184 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220512001153-7184: (1.0826142s)
	I0512 00:22:49.034408   10184 host.go:66] Checking if "multinode-20220512001153-7184" exists ...
	I0512 00:22:49.048283   10184 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 00:22:49.058953   10184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220512001153-7184
	I0512 00:22:50.174068   10184 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220512001153-7184: (1.1144637s)
	I0512 00:22:50.174068   10184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64818 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-20220512001153-7184\id_rsa Username:docker}
	I0512 00:22:50.323949   10184 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.2756005s)
	I0512 00:22:50.336505   10184 ssh_runner.go:195] Run: systemctl --version
	I0512 00:22:50.364843   10184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 00:22:50.414498   10184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220512001153-7184
	I0512 00:22:51.497920   10184 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220512001153-7184: (1.0832584s)
	I0512 00:22:51.498911   10184 kubeconfig.go:92] found "multinode-20220512001153-7184" server: "https://127.0.0.1:64822"
	I0512 00:22:51.498911   10184 api_server.go:165] Checking apiserver status ...
	I0512 00:22:51.509847   10184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0512 00:22:51.560147   10184 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1755/cgroup
	I0512 00:22:51.587716   10184 api_server.go:181] apiserver freezer: "20:freezer:/docker/aa137b362ac3ab83988333cb06a33a1933d6cbed69cb05303e39908ece105a9b/kubepods/burstable/pod17b418ee50da37b10a1d446fc0226fab/e80bdad59c89115dde3b3c15149cbf5a20fe6b187d18b8e4904b124c29568313"
	I0512 00:22:51.599374   10184 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/aa137b362ac3ab83988333cb06a33a1933d6cbed69cb05303e39908ece105a9b/kubepods/burstable/pod17b418ee50da37b10a1d446fc0226fab/e80bdad59c89115dde3b3c15149cbf5a20fe6b187d18b8e4904b124c29568313/freezer.state
	I0512 00:22:51.634038   10184 api_server.go:203] freezer state: "THAWED"
	I0512 00:22:51.634038   10184 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64822/healthz ...
	I0512 00:22:51.658757   10184 api_server.go:266] https://127.0.0.1:64822/healthz returned 200:
	ok
	I0512 00:22:51.658757   10184 status.go:419] multinode-20220512001153-7184 apiserver status = Running (err=<nil>)
	I0512 00:22:51.658757   10184 status.go:255] multinode-20220512001153-7184 status: &{Name:multinode-20220512001153-7184 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0512 00:22:51.658757   10184 status.go:253] checking status of multinode-20220512001153-7184-m02 ...
	I0512 00:22:51.679759   10184 cli_runner.go:164] Run: docker container inspect multinode-20220512001153-7184-m02 --format={{.State.Status}}
	I0512 00:22:52.740842   10184 cli_runner.go:217] Completed: docker container inspect multinode-20220512001153-7184-m02 --format={{.State.Status}}: (1.0608141s)
	I0512 00:22:52.740842   10184 status.go:328] multinode-20220512001153-7184-m02 host status = "Running" (err=<nil>)
	I0512 00:22:52.740842   10184 host.go:66] Checking if "multinode-20220512001153-7184-m02" exists ...
	I0512 00:22:52.749424   10184 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220512001153-7184-m02
	I0512 00:22:53.872516   10184 cli_runner.go:217] Completed: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220512001153-7184-m02: (1.1228767s)
	I0512 00:22:53.872516   10184 host.go:66] Checking if "multinode-20220512001153-7184-m02" exists ...
	I0512 00:22:53.884102   10184 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0512 00:22:53.891214   10184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220512001153-7184-m02
	I0512 00:22:54.966148   10184 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220512001153-7184-m02: (1.0743503s)
	I0512 00:22:54.966148   10184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64877 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-20220512001153-7184-m02\id_rsa Username:docker}
	I0512 00:22:55.099533   10184 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.2153677s)
	I0512 00:22:55.110242   10184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0512 00:22:55.141912   10184 status.go:255] multinode-20220512001153-7184-m02 status: &{Name:multinode-20220512001153-7184-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0512 00:22:55.141912   10184 status.go:253] checking status of multinode-20220512001153-7184-m03 ...
	I0512 00:22:55.159008   10184 cli_runner.go:164] Run: docker container inspect multinode-20220512001153-7184-m03 --format={{.State.Status}}
	I0512 00:22:56.275901   10184 cli_runner.go:217] Completed: docker container inspect multinode-20220512001153-7184-m03 --format={{.State.Status}}: (1.1168364s)
	I0512 00:22:56.275901   10184 status.go:328] multinode-20220512001153-7184-m03 host status = "Stopped" (err=<nil>)
	I0512 00:22:56.275901   10184 status.go:341] host is not running, skipping remaining checks
	I0512 00:22:56.275901   10184 status.go:255] multinode-20220512001153-7184-m03 status: &{Name:multinode-20220512001153-7184-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (30.40s)
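The exit status 7 above is expected rather than a failure: minikube status exits non-zero whenever any node in the profile is not fully Running, which is why the test still passes. A by-hand sketch (cmd.exe syntax assumed for the exit-code check):

	minikube -p multinode-20220512001153-7184 node stop m03
	minikube -p multinode-20220512001153-7184 status
	echo %ERRORLEVEL%   # 7 once m03 reports host: Stopped / kubelet: Stopped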

TestMultiNode/serial/StartAfterStop (61.82s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:242: (dbg) Done: docker version -f {{.Server.Version}}: (1.1618898s)
multinode_test.go:252: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 node start m03 --alsologtostderr: (46.8154073s)
multinode_test.go:259: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 status
multinode_test.go:259: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 status: (13.5392084s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (61.82s)
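This is the mirror-image check of StopNode: restarting the stopped worker brings status back to exit 0. Condensed:

	minikube -p multinode-20220512001153-7184 node start m03
	minikube -p multinode-20220512001153-7184 status   # exits 0 again with all nodes Running
	kubectl get nodes                                  # m03 is back in the node list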

TestMultiNode/serial/RestartKeepsNodes (191.27s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220512001153-7184
multinode_test.go:288: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-20220512001153-7184
E0512 00:24:08.415741    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
multinode_test.go:288: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-20220512001153-7184: (38.359019s)
multinode_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220512001153-7184 --wait=true -v=8 --alsologtostderr
E0512 00:24:52.784422    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
E0512 00:26:24.834471    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
multinode_test.go:293: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220512001153-7184 --wait=true -v=8 --alsologtostderr: (2m32.2243973s)
multinode_test.go:298: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220512001153-7184
--- PASS: TestMultiNode/serial/RestartKeepsNodes (191.27s)
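The point this subtest pins down is that a full stop/start cycle preserves previously added workers; condensed:

	minikube node list -p multinode-20220512001153-7184   # three nodes before the restart
	minikube stop -p multinode-20220512001153-7184
	minikube start -p multinode-20220512001153-7184 --wait=true
	minikube node list -p multinode-20220512001153-7184   # the same three nodes after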

TestMultiNode/serial/DeleteNode (44.68s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 node delete m03: (32.7700958s)
multinode_test.go:398: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 status --alsologtostderr
multinode_test.go:398: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 status --alsologtostderr: (10.1295285s)
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:412: (dbg) Done: docker volume ls: (1.0695108s)
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (44.68s)
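The go-template above walks every node's conditions and prints the Ready status, so a healthy two-node cluster yields two "True" lines; unescaped it reads:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
	# one " True" line per surviving node; the docker volume ls step above
	# additionally checks that the deleted node's volume was cleaned up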

TestMultiNode/serial/StopMultiNode (40.63s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 stop
multinode_test.go:312: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 stop: (32.6435394s)
multinode_test.go:318: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 status: exit status 7 (3.9999353s)

-- stdout --
	multinode-20220512001153-7184
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220512001153-7184-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 status --alsologtostderr: exit status 7 (3.9905252s)

-- stdout --
	multinode-20220512001153-7184
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220512001153-7184-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0512 00:28:30.973350    4568 out.go:296] Setting OutFile to fd 736 ...
	I0512 00:28:31.037355    4568 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 00:28:31.037355    4568 out.go:309] Setting ErrFile to fd 692...
	I0512 00:28:31.037355    4568 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0512 00:28:31.049358    4568 out.go:303] Setting JSON to false
	I0512 00:28:31.049358    4568 mustload.go:65] Loading cluster: multinode-20220512001153-7184
	I0512 00:28:31.049358    4568 config.go:178] Loaded profile config "multinode-20220512001153-7184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0512 00:28:31.049358    4568 status.go:253] checking status of multinode-20220512001153-7184 ...
	I0512 00:28:31.065352    4568 cli_runner.go:164] Run: docker container inspect multinode-20220512001153-7184 --format={{.State.Status}}
	I0512 00:28:33.610070    4568 cli_runner.go:217] Completed: docker container inspect multinode-20220512001153-7184 --format={{.State.Status}}: (2.5445874s)
	I0512 00:28:33.610256    4568 status.go:328] multinode-20220512001153-7184 host status = "Stopped" (err=<nil>)
	I0512 00:28:33.610387    4568 status.go:341] host is not running, skipping remaining checks
	I0512 00:28:33.610387    4568 status.go:255] multinode-20220512001153-7184 status: &{Name:multinode-20220512001153-7184 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0512 00:28:33.610432    4568 status.go:253] checking status of multinode-20220512001153-7184-m02 ...
	I0512 00:28:33.626644    4568 cli_runner.go:164] Run: docker container inspect multinode-20220512001153-7184-m02 --format={{.State.Status}}
	I0512 00:28:34.700650    4568 cli_runner.go:217] Completed: docker container inspect multinode-20220512001153-7184-m02 --format={{.State.Status}}: (1.0739508s)
	I0512 00:28:34.700650    4568 status.go:328] multinode-20220512001153-7184-m02 host status = "Stopped" (err=<nil>)
	I0512 00:28:34.700650    4568 status.go:341] host is not running, skipping remaining checks
	I0512 00:28:34.700650    4568 status.go:255] multinode-20220512001153-7184-m02 status: &{Name:multinode-20220512001153-7184-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (40.63s)

TestMultiNode/serial/RestartMultiNode (125.42s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:342: (dbg) Done: docker version -f {{.Server.Version}}: (1.169951s)
multinode_test.go:352: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220512001153-7184 --wait=true -v=8 --alsologtostderr --driver=docker
E0512 00:29:08.427017    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0512 00:29:28.034689    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
E0512 00:29:52.800971    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
multinode_test.go:352: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220512001153-7184 --wait=true -v=8 --alsologtostderr --driver=docker: (1m53.4490745s)
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-20220512001153-7184 status --alsologtostderr: (10.1413032s)
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (125.42s)

TestMultiNode/serial/ValidateNameConflict (146.55s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-20220512001153-7184
multinode_test.go:450: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220512001153-7184-m02 --driver=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-20220512001153-7184-m02 --driver=docker: exit status 14 (378.0404ms)

-- stdout --
	* [multinode-20220512001153-7184-m02] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13639
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220512001153-7184-m02' is duplicated with machine name 'multinode-20220512001153-7184-m02' in profile 'multinode-20220512001153-7184'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-20220512001153-7184-m03 --driver=docker
E0512 00:31:24.853537    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
E0512 00:32:11.674643    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
multinode_test.go:458: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-20220512001153-7184-m03 --driver=docker: (1m58.7639087s)
multinode_test.go:465: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-20220512001153-7184
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-20220512001153-7184: exit status 80 (5.5484904s)

-- stdout --
	* Adding node m03 to cluster multinode-20220512001153-7184
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220512001153-7184-m03 already exists in multinode-20220512001153-7184-m03 profile
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_node_6ccce2fc44e3bb58d6c4f91e09ae7c7eaaf65535_3.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-20220512001153-7184-m03
E0512 00:32:56.371258    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
multinode_test.go:470: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-20220512001153-7184-m03: (21.5255341s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (146.55s)
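Two distinct guards are exercised above, each with its own exit code; condensed:

	# a new profile may not reuse an existing machine name:
	minikube start -p multinode-20220512001153-7184-m02 --driver=docker   # exit 14 (MK_USAGE)
	# and node add refuses when the next generated node name is already taken:
	minikube node add -p multinode-20220512001153-7184                    # exit 80 (GUEST_NODE_ADD)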

TestPreload (338.47s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20220512003344-7184 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0
E0512 00:34:08.435383    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0512 00:34:52.810367    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
E0512 00:36:24.861195    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
preload_test.go:48: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-20220512003344-7184 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.17.0: (2m46.5931698s)
preload_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-20220512003344-7184 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:61: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-20220512003344-7184 -- docker pull gcr.io/k8s-minikube/busybox: (7.646866s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-20220512003344-7184 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.17.3
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-20220512003344-7184 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker --kubernetes-version=v1.17.3: (2m14.1177986s)
preload_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-20220512003344-7184 -- docker images
preload_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-20220512003344-7184 -- docker images: (6.4496394s)
helpers_test.go:175: Cleaning up "test-preload-20220512003344-7184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-20220512003344-7184
E0512 00:39:08.449780    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-20220512003344-7184: (23.6631263s)
--- PASS: TestPreload (338.47s)
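Condensed, the preload check above is: create a cluster with preloaded images disabled, side-load an extra image, upgrade to a Kubernetes version that is served by a preload tarball, and confirm the side-loaded image survived (profile name is a placeholder):

	minikube start -p <profile> --preload=false --kubernetes-version=v1.17.0 --driver=docker
	minikube ssh -p <profile> -- docker pull gcr.io/k8s-minikube/busybox
	minikube start -p <profile> --kubernetes-version=v1.17.3 --driver=docker
	minikube ssh -p <profile> -- docker images   # busybox must still be listed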

TestScheduledStopWindows (216.41s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-20220512003922-7184 --memory=2048 --driver=docker
E0512 00:39:52.826640    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-20220512003922-7184 --memory=2048 --driver=docker: (1m47.0934463s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-20220512003922-7184 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-20220512003922-7184 --schedule 5m: (5.3078781s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-20220512003922-7184 -n scheduled-stop-20220512003922-7184
scheduled_stop_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-20220512003922-7184 -n scheduled-stop-20220512003922-7184: (6.9312141s)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-20220512003922-7184 -- sudo systemctl show minikube-scheduled-stop --no-page
E0512 00:41:24.870810    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-20220512003922-7184 -- sudo systemctl show minikube-scheduled-stop --no-page: (6.4579323s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-20220512003922-7184 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-20220512003922-7184 --schedule 5s: (4.8480455s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-20220512003922-7184
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-20220512003922-7184: exit status 7 (2.9280845s)

-- stdout --
	scheduled-stop-20220512003922-7184
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20220512003922-7184 -n scheduled-stop-20220512003922-7184
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-20220512003922-7184 -n scheduled-stop-20220512003922-7184: exit status 7 (2.904381s)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220512003922-7184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-20220512003922-7184
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-20220512003922-7184: (19.9293262s)
--- PASS: TestScheduledStopWindows (216.41s)
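Condensed, the scheduled-stop flow exercised above (profile name is a placeholder):

	minikube stop -p <profile> --schedule 5m                 # arm a delayed stop
	minikube status --format={{.TimeToStop}} -p <profile>    # shows the remaining countdown
	minikube stop -p <profile> --schedule 5s                 # re-arm with a short fuse
	minikube status -p <profile>                             # exits 7 with host: Stopped once it fires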

TestInsufficientStorage (111.21s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-20220512004557-7184 --memory=2048 --output=json --wait=true --driver=docker
E0512 00:46:08.088264    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
E0512 00:46:24.887961    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-20220512004557-7184 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (1m18.1739366s)

-- stdout --
	{"specversion":"1.0","id":"5f1bee5f-e849-48ca-9590-816632c35096","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220512004557-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"341ca3a6-f955-4083-a6e5-aecb3cd0d720","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"19dae754-2cc3-45e6-981f-535adf8b7fee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"427b99ac-724a-4334-a7a0-b99257a4eed3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13639"}}
	{"specversion":"1.0","id":"d7d98db2-187a-4f21-ad8b-4b94602a4098","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c4c99d91-bf11-4e7a-a3a4-cd0e80496c9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8ea13fcb-a998-4f2e-a1bb-e420069b2df6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"aea12b98-ed9e-4b28-b359-d34378a7f93b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fa26a586-281e-4e3a-bcf5-20dd37df9ca1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with the root privilege"}}
	{"specversion":"1.0","id":"50cc21c9-28a3-4df6-8fc0-c3294713d14f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220512004557-7184 in cluster insufficient-storage-20220512004557-7184","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"494993fe-6ff1-43a3-8519-6caafc775219","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"8a5fde19-b746-4c9a-8e42-405ab84ea4dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"972a9859-590b-4fb9-bdf6-8de536618707","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20220512004557-7184 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20220512004557-7184 --output=json --layout=cluster: exit status 7 (6.4806229s)

-- stdout --
	{"Name":"insufficient-storage-20220512004557-7184","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220512004557-7184","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0512 00:47:21.858341    3844 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220512004557-7184" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-20220512004557-7184 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-20220512004557-7184 --output=json --layout=cluster: exit status 7 (6.5787872s)

-- stdout --
	{"Name":"insufficient-storage-20220512004557-7184","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220512004557-7184","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0512 00:47:28.424207    8228 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220512004557-7184" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	E0512 00:47:28.461791    8228 status.go:557] unable to read event log: stat: CreateFile C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\insufficient-storage-20220512004557-7184\events.json: The system cannot find the file specified.

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220512004557-7184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-20220512004557-7184
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-20220512004557-7184: (19.9755701s)
--- PASS: TestInsufficientStorage (111.21s)
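Two details worth noting in the output above, sketched with a placeholder profile name: --output=json turns start into a stream of CloudEvents-style JSON records (the blob above), and the fake full disk comes from the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE variables visible in the event stream.

	minikube start -p <profile> --output=json --driver=docker
	# the error event carries "exitcode":"26" (RSRC_DOCKER_STORAGE) once /var hits capacity
	minikube status -p <profile> --output=json --layout=cluster
	# reports "StatusCode":507 / "InsufficientStorage" for the cluster and its node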

TestKubernetesUpgrade (305.78s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220512005507-7184 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220512005507-7184 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker: (2m9.7422349s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220512005507-7184
version_upgrade_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-20220512005507-7184: (9.472709s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-20220512005507-7184 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-20220512005507-7184 status --format={{.Host}}: exit status 7 (3.0431126s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220512005507-7184 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220512005507-7184 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker: (1m37.8086355s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220512005507-7184 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220512005507-7184 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220512005507-7184 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker: exit status 106 (443.2721ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20220512005507-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13639
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.6-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220512005507-7184
	    minikube start -p kubernetes-upgrade-20220512005507-7184 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220512005507-71842 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.6-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220512005507-7184 --kubernetes-version=v1.23.6-rc.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220512005507-7184 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker
E0512 00:59:08.511688    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-20220512005507-7184 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker: (39.8756551s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220512005507-7184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20220512005507-7184

                                                
                                                
=== CONT  TestKubernetesUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-20220512005507-7184: (25.0788919s)
--- PASS: TestKubernetesUpgrade (305.78s)
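
For reference, the upgrade path exercised above replays by hand; a minimal sketch, assuming minikube is on PATH and reusing this run's profile name:

	minikube start -p kubernetes-upgrade-20220512005507-7184 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker
	minikube stop -p kubernetes-upgrade-20220512005507-7184
	minikube start -p kubernetes-upgrade-20220512005507-7184 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --driver=docker
	# attempting to go back down exits 106 with K8S_DOWNGRADE_UNSUPPORTED, as captured above:
	minikube start -p kubernetes-upgrade-20220512005507-7184 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker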

                                                
                                    
TestMissingContainerUpgrade (394.72s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.9.1.4174074927.exe start -p missing-upgrade-20220512005316-7184 --memory=2200 --driver=docker
E0512 00:54:08.495761    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.9.1.4174074927.exe start -p missing-upgrade-20220512005316-7184 --memory=2200 --driver=docker: (3m15.4944135s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220512005316-7184
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220512005316-7184: (12.1602327s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220512005316-7184
version_upgrade_test.go:330: (dbg) Done: docker rm missing-upgrade-20220512005316-7184: (1.1775662s)
version_upgrade_test.go:336: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-20220512005316-7184 --memory=2200 --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-20220512005316-7184 --memory=2200 --alsologtostderr -v=1 --driver=docker: (2m35.0824917s)
helpers_test.go:175: Cleaning up "missing-upgrade-20220512005316-7184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-20220512005316-7184

                                                
                                                
=== CONT  TestMissingContainerUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-20220512005316-7184: (30.3287517s)
--- PASS: TestMissingContainerUpgrade (394.72s)
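
The scenario above simulates a cluster whose container was removed behind minikube's back: an older release creates the profile, docker deletes the container, and the current binary must recover. A sketch, assuming an archived v1.9.1 binary (the test extracts one to a temp path; the binary name here is illustrative):

	minikube-v1.9.1.exe start -p missing-upgrade-20220512005316-7184 --memory=2200 --driver=docker
	docker stop missing-upgrade-20220512005316-7184
	docker rm missing-upgrade-20220512005316-7184
	# the current binary recreates the missing container on restart:
	minikube start -p missing-upgrade-20220512005316-7184 --memory=2200 --driver=docker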

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220512004748-7184 --no-kubernetes --kubernetes-version=1.20 --driver=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-20220512004748-7184 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (458.2147ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-20220512004748-7184] minikube v1.25.2 on Microsoft Windows 10 Enterprise N 10.0.19042 Build 19042
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=13639
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.46s)
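
The MK_USAGE guard fires on the flag combination alone, before any cluster work, which is why the run fails in under half a second. Reproducing it, and clearing a globally configured version as the error text suggests, assuming minikube is on PATH:

	minikube start -p NoKubernetes-20220512004748-7184 --no-kubernetes --kubernetes-version=1.20 --driver=docker   # exit status 14 (MK_USAGE)
	minikube config unset kubernetes-version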

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.53s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (190.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220512004748-7184 --driver=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-20220512004748-7184 --driver=docker: (3m0.9961589s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-20220512004748-7184 status -o json

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:200: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-20220512004748-7184 status -o json: (9.7938754s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (190.79s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (406.24s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.9.0.319510575.exe start -p stopped-upgrade-20220512004748-7184 --memory=2200 --vm-driver=docker
E0512 00:48:51.739443    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0512 00:49:08.486056    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0512 00:49:36.428206    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
E0512 00:49:52.862794    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.9.0.319510575.exe start -p stopped-upgrade-20220512004748-7184 --memory=2200 --vm-driver=docker: (4m56.1770256s)
version_upgrade_test.go:199: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.9.0.319510575.exe -p stopped-upgrade-20220512004748-7184 stop

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:199: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.9.0.319510575.exe -p stopped-upgrade-20220512004748-7184 stop: (22.2998343s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-20220512004748-7184 --memory=2200 --alsologtostderr -v=1 --driver=docker

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-20220512004748-7184 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m27.7652333s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (406.24s)
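
The binary-upgrade flow uses the archived v1.9.0 release for the first start and the stop, then the freshly built binary for the restart. A sketch, assuming both binaries are to hand (names illustrative); note the old flag spelling --vm-driver, which is what v1.9.0 accepted, per the commands logged above:

	minikube-v1.9.0.exe start -p stopped-upgrade-20220512004748-7184 --memory=2200 --vm-driver=docker
	minikube-v1.9.0.exe stop -p stopped-upgrade-20220512004748-7184
	minikube start -p stopped-upgrade-20220512004748-7184 --memory=2200 --driver=docker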

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (75.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-20220512004748-7184 --no-kubernetes --driver=docker

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-20220512004748-7184 --no-kubernetes --driver=docker: (44.3722871s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-20220512004748-7184 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-20220512004748-7184 status -o json: exit status 2 (7.0024322s)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-20220512004748-7184","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-20220512004748-7184
no_kubernetes_test.go:124: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-20220512004748-7184: (23.9363798s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (75.31s)
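
The exit status 2 above is status reporting a running host whose Kubernetes components are stopped, which is exactly what a --no-kubernetes profile should look like. Checking the split directly, assuming jq is available:

	minikube status -p NoKubernetes-20220512004748-7184 -o json | jq ".Host, .Kubelet"
	# "Running" then "Stopped", matching the JSON captured above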

                                                
                                    
TestPause/serial/Start (519.94s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20220512005140-7184 --memory=2048 --install-addons=false --wait=all --driver=docker

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-20220512005140-7184 --memory=2048 --install-addons=false --wait=all --driver=docker: (8m39.9417855s)
--- PASS: TestPause/serial/Start (519.94s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (13.8s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-20220512004748-7184
version_upgrade_test.go:213: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-20220512004748-7184: (13.8003429s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (13.80s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (41.4s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-20220512005140-7184 --alsologtostderr -v=1 --driver=docker
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-20220512005140-7184 --alsologtostderr -v=1 --driver=docker: (41.3831877s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (41.40s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (553.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20220512010246-7184 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0
E0512 01:02:48.143960    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-20220512010246-7184 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: (9m13.6211753s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (553.62s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (183.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20220512010315-7184 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6-rc.0
E0512 01:04:08.539574    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0512 01:04:52.902704    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-20220512010315-7184 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6-rc.0: (3m3.3804881s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (183.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (126.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20220512010611-7184 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.5
E0512 01:06:16.483403    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-20220512010611-7184 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.5: (2m6.8948688s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (126.90s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220512010315-7184 create -f testdata\busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [1e1d035d-d4a8-4b74-bd63-6eebb5abac78] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [1e1d035d-d4a8-4b74-bd63-6eebb5abac78] Running
E0512 01:06:24.953818    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.0540994s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220512010315-7184 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.15s)
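
The harness's 8m0s label wait is roughly equivalent to a kubectl wait on the same selector; a sketch using the context name from this run:

	kubectl --context no-preload-20220512010315-7184 create -f testdata\busybox.yaml
	kubectl --context no-preload-20220512010315-7184 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	kubectl --context no-preload-20220512010315-7184 exec busybox -- /bin/sh -c "ulimit -n"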

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (7s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20220512010315-7184 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-20220512010315-7184 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (5.403721s)
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context no-preload-20220512010315-7184 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:217: (dbg) Done: kubectl --context no-preload-20220512010315-7184 describe deploy/metrics-server -n kube-system: (1.5843528s)
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (7.00s)
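
The --images and --registries overrides pin the addon to a known image (an echoserver standing in for the real metrics-server) so the describe step has something deterministic to assert on. The same override works by hand:

	minikube addons enable metrics-server -p no-preload-20220512010315-7184 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	kubectl --context no-preload-20220512010315-7184 describe deploy/metrics-server -n kube-system   # the deployment image should now reference fake.domain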

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (18.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-20220512010315-7184 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-20220512010315-7184 --alsologtostderr -v=3: (18.3350455s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (5.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220512010315-7184 -n no-preload-20220512010315-7184
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220512010315-7184 -n no-preload-20220512010315-7184: exit status 7 (2.8856075s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20220512010315-7184 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-20220512010315-7184 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.081042s)
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (5.97s)
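
addons enable can succeed against a stopped profile because the addon selection is recorded in the profile's config and applied on the next start (which is what SecondStart below then verifies). The sequence above, by hand:

	minikube status --format={{.Host}} -p no-preload-20220512010315-7184    # "Stopped", exit status 7
	minikube addons enable dashboard -p no-preload-20220512010315-7184 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4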

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (412.58s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-20220512010315-7184 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6-rc.0

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-20220512010315-7184 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.23.6-rc.0: (6m43.6552206s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220512010315-7184 -n no-preload-20220512010315-7184
start_stop_delete_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-20220512010315-7184 -n no-preload-20220512010315-7184: (8.9232805s)
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (412.58s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220512010611-7184 create -f testdata\busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [cf03003c-21ae-452a-9ba0-fd8532eb4928] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [cf03003c-21ae-452a-9ba0-fd8532eb4928] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.0368528s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220512010611-7184 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (5.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20220512010611-7184 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-20220512010611-7184 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (5.396766s)
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context embed-certs-20220512010611-7184 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (5.98s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (18.79s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-20220512010611-7184 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-20220512010611-7184 --alsologtostderr -v=3: (18.7889856s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.79s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (6.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220512010611-7184 -n embed-certs-20220512010611-7184
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220512010611-7184 -n embed-certs-20220512010611-7184: exit status 7 (3.0711635s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20220512010611-7184 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-20220512010611-7184 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.1605012s)
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (6.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (413.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-20220512010611-7184 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.5
E0512 01:09:08.556314    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0512 01:09:52.933594    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-20220512010611-7184 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.23.5: (6m45.7660502s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220512010611-7184 -n embed-certs-20220512010611-7184
start_stop_delete_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-20220512010611-7184 -n embed-certs-20220512010611-7184: (7.8012525s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (413.57s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/FirstStart (133.79s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220512011148-7184 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.5

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220512011148-7184 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.5: (2m13.7932741s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (133.79s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220512010246-7184 create -f testdata\busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [070fb71a-1145-4881-a9cd-076ab7a6d77b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [070fb71a-1145-4881-a9cd-076ab7a6d77b] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.0549737s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220512010246-7184 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20220512010246-7184 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-20220512010246-7184 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (5.4034765s)
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context old-k8s-version-20220512010246-7184 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:217: (dbg) Done: kubectl --context old-k8s-version-20220512010246-7184 describe deploy/metrics-server -n kube-system: (2.5865157s)
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (8.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (18.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-20220512010246-7184 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-20220512010246-7184 --alsologtostderr -v=3: (18.2107666s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (18.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220512010246-7184 -n old-k8s-version-20220512010246-7184
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220512010246-7184 -n old-k8s-version-20220512010246-7184: exit status 7 (2.924307s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20220512010246-7184 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-20220512010246-7184 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.0830501s)
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (474.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-20220512010246-7184 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-20220512010246-7184 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.16.0: (7m47.5419885s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220512010246-7184 -n old-k8s-version-20220512010246-7184

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-20220512010246-7184 -n old-k8s-version-20220512010246-7184: (7.2304692s)
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (474.77s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (41.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-8m4mq" [1e98361a-a9f1-46f4-9918-87c2c08dcb90] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-8469778f77-8m4mq" [1e98361a-a9f1-46f4-9918-87c2c08dcb90] Running
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 41.1838561s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (41.19s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/DeployApp (11.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220512011148-7184 create -f testdata\busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [88181eaf-3164-49ec-a268-6e0f32698745] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [88181eaf-3164-49ec-a268-6e0f32698745] Running
E0512 01:14:08.562911    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 10.0361096s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220512011148-7184 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (11.20s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (6.59s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20220512011148-7184 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-different-port-20220512011148-7184 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (6.1292939s)
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context default-k8s-different-port-20220512011148-7184 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (6.59s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Stop (18.63s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20220512011148-7184 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-different-port-20220512011148-7184 --alsologtostderr -v=3: (18.6344693s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (18.63s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-8m4mq" [1e98361a-a9f1-46f4-9918-87c2c08dcb90] Running

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0325097s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context no-preload-20220512010315-7184 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.55s)
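
Both dashboard checks poll pods by the k8s-app=kubernetes-dashboard label; the equivalent by hand, with this run's context:

	kubectl --context no-preload-20220512010315-7184 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-20220512010315-7184 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard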

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (5.91s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220512011148-7184 -n default-k8s-different-port-20220512011148-7184

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220512011148-7184 -n default-k8s-different-port-20220512011148-7184: exit status 7 (2.9686278s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20220512011148-7184 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-different-port-20220512011148-7184 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (2.9373367s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (5.91s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (6.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p no-preload-20220512010315-7184 "sudo crictl images -o json"

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe ssh -p no-preload-20220512010315-7184 "sudo crictl images -o json": (6.407639s)
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (6.41s)
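
Image verification goes through the container runtime rather than the Kubernetes API, hence the ssh into the node. The same listing by hand:

	minikube ssh -p no-preload-20220512010315-7184 "sudo crictl images -o json"
	# the harness scans the returned repo tags and reports anything outside minikube's stock image set,
	# e.g. gcr.io/k8s-minikube/busybox:1.28.4-glibc from the earlier deploy step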

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/SecondStart (429.64s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220512011148-7184 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.5

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-different-port-20220512011148-7184 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.5: (7m0.1325872s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220512011148-7184 -n default-k8s-different-port-20220512011148-7184
E0512 01:21:47.698168    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-20220512010315-7184\client.crt: The system cannot find the path specified.

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-different-port-20220512011148-7184 -n default-k8s-different-port-20220512011148-7184: (9.5101628s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (429.64s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (41.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-20220512010315-7184 --alsologtostderr -v=1
E0512 01:14:52.935491    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-20220512010315-7184 --alsologtostderr -v=1: (6.4143408s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20220512010315-7184 -n no-preload-20220512010315-7184
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20220512010315-7184 -n no-preload-20220512010315-7184: exit status 2 (6.8647471s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20220512010315-7184 -n no-preload-20220512010315-7184
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20220512010315-7184 -n no-preload-20220512010315-7184: exit status 2 (6.9884031s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-20220512010315-7184 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-20220512010315-7184 --alsologtostderr -v=1: (6.955711s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20220512010315-7184 -n no-preload-20220512010315-7184
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-20220512010315-7184 -n no-preload-20220512010315-7184: (7.3497754s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20220512010315-7184 -n no-preload-20220512010315-7184
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-20220512010315-7184 -n no-preload-20220512010315-7184: (7.0773629s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (41.65s)
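
pause leaves the host running but freezes the control plane and stops the kubelet, so the two Go-template status probes above are expected to disagree with a plain "Running" (hence the tolerated exit status 2). Replaying the sequence, assuming minikube is on PATH:

	minikube pause -p no-preload-20220512010315-7184
	minikube status --format={{.APIServer}} -p no-preload-20220512010315-7184   # "Paused", exit status 2
	minikube status --format={{.Kubelet}} -p no-preload-20220512010315-7184     # "Stopped", exit status 2
	minikube unpause -p no-preload-20220512010315-7184
	minikube status --format={{.APIServer}} -p no-preload-20220512010315-7184   # exits 0 again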

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (52.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-bwdns" [4c48a657-b6a3-40e8-86b8-75310a5e2c36] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-8469778f77-bwdns" [4c48a657-b6a3-40e8-86b8-75310a5e2c36] Running
E0512 01:16:40.448848    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-20220512010315-7184\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 52.1341854s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (52.14s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (135.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20220512011616-7184 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6-rc.0
E0512 01:16:19.866081    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-20220512010315-7184\client.crt: The system cannot find the path specified.
E0512 01:16:19.881396    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-20220512010315-7184\client.crt: The system cannot find the path specified.
E0512 01:16:19.896540    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-20220512010315-7184\client.crt: The system cannot find the path specified.
E0512 01:16:19.927091    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-20220512010315-7184\client.crt: The system cannot find the path specified.
E0512 01:16:19.973092    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-20220512010315-7184\client.crt: The system cannot find the path specified.
E0512 01:16:20.067474    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-20220512010315-7184\client.crt: The system cannot find the path specified.
E0512 01:16:20.239392    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-20220512010315-7184\client.crt: The system cannot find the path specified.
E0512 01:16:20.570066    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-20220512010315-7184\client.crt: The system cannot find the path specified.
E0512 01:16:21.214211    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-20220512010315-7184\client.crt: The system cannot find the path specified.
E0512 01:16:22.502321    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-20220512010315-7184\client.crt: The system cannot find the path specified.
E0512 01:16:24.984805    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
E0512 01:16:25.065098    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-20220512010315-7184\client.crt: The system cannot find the path specified.
E0512 01:16:30.200020    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-20220512010315-7184\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-20220512011616-7184 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6-rc.0: (2m15.1499438s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (135.15s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.52s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-bwdns" [4c48a657-b6a3-40e8-86b8-75310a5e2c36] Running
start_stop_delete_test.go:289: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0320699s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context embed-certs-20220512010611-7184 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.52s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (6.57s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p embed-certs-20220512010611-7184 "sudo crictl images -o json"
start_stop_delete_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe ssh -p embed-certs-20220512010611-7184 "sudo crictl images -o json": (6.5686164s)
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (6.57s)
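Note: the image verification above runs "sudo crictl images -o json" over minikube ssh and scans the repo tags for anything outside the images minikube itself ships, hence the "Found non-minikube image" line. A sketch of that scan; the JSON field names ({"images":[{"repoTags":[...]}]}) match crictl's output as I know it but should be treated as an assumption, and the k8s.gcr.io prefix check is only an illustrative stand-in for the real expected-image list:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictlImages models only the fields this scan needs from
// `crictl images -o json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("out/minikube-windows-amd64.exe", "ssh",
		"-p", "embed-certs-20220512010611-7184", "sudo crictl images -o json").Output()
	if err != nil {
		panic(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			// Illustrative check only: flag anything not from k8s.gcr.io,
			// e.g. gcr.io/k8s-minikube/busybox in the run above.
			if !strings.HasPrefix(tag, "k8s.gcr.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}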

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (5.9s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20220512011616-7184 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-20220512011616-7184 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (5.8961014s)
start_stop_delete_test.go:213: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (5.90s)
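Note: the addons-enable step above uses the --images and --registries flags, exactly as shown in the command, to remap which image an addon pulls and from where; here MetricsServer's registry is pointed at fake.domain, presumably so the suite exercises the override plumbing rather than a real pull. A sketch of issuing the same invocation from Go:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Flags copied verbatim from the test invocation above: --images remaps
	// the addon's image name, --registries remaps its registry.
	cmd := exec.Command("out/minikube-windows-amd64.exe",
		"addons", "enable", "metrics-server",
		"-p", "newest-cni-20220512011616-7184",
		"--images=MetricsServer=k8s.gcr.io/echoserver:1.4",
		"--registries=MetricsServer=fake.domain")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		panic(err)
	}
}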

TestStartStop/group/newest-cni/serial/Stop (18.98s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-20220512011616-7184 --alsologtostderr -v=3

=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-20220512011616-7184 --alsologtostderr -v=3: (18.9764981s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (18.98s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (6.1s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220512011616-7184 -n newest-cni-20220512011616-7184

=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220512011616-7184 -n newest-cni-20220512011616-7184: exit status 7 (3.0601472s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20220512011616-7184 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-20220512011616-7184 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (3.0418007s)
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (6.10s)

TestStartStop/group/newest-cni/serial/SecondStart (86.13s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-20220512011616-7184 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6-rc.0
E0512 01:19:03.842922    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-20220512010315-7184\client.crt: The system cannot find the path specified.
E0512 01:19:08.575131    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0512 01:19:28.203746    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-20220512011616-7184 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker --kubernetes-version=v1.23.6-rc.0: (1m19.1169326s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220512011616-7184 -n newest-cni-20220512011616-7184
start_stop_delete_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-20220512011616-7184 -n newest-cni-20220512011616-7184: (7.0103073s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (86.13s)

TestNetworkPlugins/group/auto/Start (144.81s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-20220512010229-7184 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker
E0512 01:19:52.962305    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-20220512010229-7184 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker: (2m24.8121511s)
--- PASS: TestNetworkPlugins/group/auto/Start (144.81s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (7.68s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p newest-cni-20220512011616-7184 "sudo crictl images -o json"

=== CONT  TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe ssh -p newest-cni-20220512011616-7184 "sudo crictl images -o json": (7.6825407s)
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (7.68s)

TestStartStop/group/newest-cni/serial/Pause (45.59s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-20220512011616-7184 --alsologtostderr -v=1

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-20220512011616-7184 --alsologtostderr -v=1: (6.634844s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20220512011616-7184 -n newest-cni-20220512011616-7184

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20220512011616-7184 -n newest-cni-20220512011616-7184: exit status 2 (6.7468619s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-20220512011616-7184 -n newest-cni-20220512011616-7184

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-20220512011616-7184 -n newest-cni-20220512011616-7184: exit status 2 (7.0845412s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-20220512011616-7184 --alsologtostderr -v=1

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe unpause -p newest-cni-20220512011616-7184 --alsologtostderr -v=1: (6.9844449s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20220512011616-7184 -n newest-cni-20220512011616-7184

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-20220512011616-7184 -n newest-cni-20220512011616-7184: (9.5477586s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-20220512011616-7184 -n newest-cni-20220512011616-7184
E0512 01:21:19.875517    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-20220512010315-7184\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-20220512011616-7184 -n newest-cni-20220512011616-7184: (8.5901015s)
--- PASS: TestStartStop/group/newest-cni/serial/Pause (45.59s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-6fb5469cf5-mrs7d" [d3ab2feb-63ed-486a-a3c3-0d1e4e1e34cd] Running
start_stop_delete_test.go:276: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0518386s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.05s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-6fb5469cf5-mrs7d" [d3ab2feb-63ed-486a-a3c3-0d1e4e1e34cd] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0195486s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context old-k8s-version-20220512010246-7184 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.55s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (6.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p old-k8s-version-20220512010246-7184 "sudo crictl images -o json"

=== CONT  TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe ssh -p old-k8s-version-20220512010246-7184 "sudo crictl images -o json": (6.5993033s)
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (6.60s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (45.1s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-rjc8w" [5de43635-9073-4274-a294-86c75b45a7b0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-8469778f77-rjc8w" [5de43635-9073-4274-a294-86c75b45a7b0] Running
start_stop_delete_test.go:276: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 45.0925433s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (45.10s)

TestNetworkPlugins/group/auto/KubeletFlags (7.16s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-20220512010229-7184 "pgrep -a kubelet"

=== CONT  TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p auto-20220512010229-7184 "pgrep -a kubelet": (7.1586607s)
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (7.16s)

TestNetworkPlugins/group/auto/NetCatPod (37.32s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220512010229-7184 replace --force -f testdata\netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context auto-20220512010229-7184 replace --force -f testdata\netcat-deployment.yaml: (8.0020705s)
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-kqv6g" [1811a891-2bad-4205-a21e-19db4c70788e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:342: "netcat-668db85669-kqv6g" [1811a891-2bad-4205-a21e-19db4c70788e] Running
E0512 01:22:56.545837    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 29.0914481s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (37.32s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.73s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-rjc8w" [5de43635-9073-4274-a294-86c75b45a7b0] Running
start_stop_delete_test.go:289: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0319573s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context default-k8s-different-port-20220512011148-7184 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.73s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (6.93s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20220512011148-7184 "sudo crictl images -o json"

=== CONT  TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe ssh -p default-k8s-different-port-20220512011148-7184 "sudo crictl images -o json": (6.9270578s)
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (6.93s)

TestStartStop/group/default-k8s-different-port/serial/Pause (44.67s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20220512011148-7184 --alsologtostderr -v=1

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-different-port-20220512011148-7184 --alsologtostderr -v=1: (6.7018336s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20220512011148-7184 -n default-k8s-different-port-20220512011148-7184

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20220512011148-7184 -n default-k8s-different-port-20220512011148-7184: exit status 2 (6.9273276s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20220512011148-7184 -n default-k8s-different-port-20220512011148-7184
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20220512011148-7184 -n default-k8s-different-port-20220512011148-7184: exit status 2 (6.6629856s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-different-port-20220512011148-7184 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-different-port-20220512011148-7184 --alsologtostderr -v=1: (6.7828398s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20220512011148-7184 -n default-k8s-different-port-20220512011148-7184
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-different-port-20220512011148-7184 -n default-k8s-different-port-20220512011148-7184: (10.8113573s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20220512011148-7184 -n default-k8s-different-port-20220512011148-7184
start_stop_delete_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-different-port-20220512011148-7184 -n default-k8s-different-port-20220512011148-7184: (6.784687s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (44.67s)

TestNetworkPlugins/group/auto/DNS (0.65s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.65s)

TestNetworkPlugins/group/auto/Localhost (0.59s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-20220512010229-7184 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.59s)

TestNetworkPlugins/group/auto/HairPin (5.52s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-20220512010229-7184 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-20220512010229-7184 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.5097061s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.52s)
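Note: the HairPin step asks the netcat pod to connect back to itself through its own Service name ("hairpin" traffic). With this plugin the nc probe exits 1, and the suite evidently treats the refused connection as the expected outcome, hence PASS despite the non-zero exit; the kindnet run later in this report completes the same probe successfully. A sketch of the probe with that expected-failure handling spelled out (not the net_test.go source):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// From inside the netcat deployment, dial the "netcat" Service itself.
	cmd := exec.Command("kubectl", "--context", "auto-20220512010229-7184",
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit status 1 = the connection failed, which is the expected
		// result when the plugin does not support hairpin traffic.
		fmt.Printf("hairpin blocked (exit status %d), expected here\n", exitErr.ExitCode())
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Println("hairpin connection succeeded")
}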

TestNetworkPlugins/group/false/Start (138.4s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-20220512010244-7184 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker
E0512 01:24:52.966658    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
E0512 01:26:19.890933    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-20220512010315-7184\client.crt: The system cannot find the path specified.
E0512 01:26:25.022331    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p false-20220512010244-7184 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker: (2m18.3994724s)
--- PASS: TestNetworkPlugins/group/false/Start (138.40s)

TestNetworkPlugins/group/false/KubeletFlags (6.44s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-20220512010244-7184 "pgrep -a kubelet"
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p false-20220512010244-7184 "pgrep -a kubelet": (6.4369611s)
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (6.44s)

TestNetworkPlugins/group/false/NetCatPod (19.83s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-20220512010244-7184 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-kdn2f" [37f1b86b-84f8-4d8a-89e6-80057a254ad5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-kdn2f" [37f1b86b-84f8-4d8a-89e6-80057a254ad5] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 19.0297437s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (19.83s)

TestNetworkPlugins/group/false/DNS (0.6s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220512010244-7184 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.60s)

TestNetworkPlugins/group/false/Localhost (0.58s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:188: (dbg) Run:  kubectl --context false-20220512010244-7184 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.58s)

TestNetworkPlugins/group/false/HairPin (5.52s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Run:  kubectl --context false-20220512010244-7184 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0512 01:27:00.487021    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-20220512010246-7184\client.crt: The system cannot find the path specified.
E0512 01:27:00.502072    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-20220512010246-7184\client.crt: The system cannot find the path specified.
E0512 01:27:00.518056    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-20220512010246-7184\client.crt: The system cannot find the path specified.
E0512 01:27:00.548967    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-20220512010246-7184\client.crt: The system cannot find the path specified.
E0512 01:27:00.596504    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-20220512010246-7184\client.crt: The system cannot find the path specified.
E0512 01:27:00.689413    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-20220512010246-7184\client.crt: The system cannot find the path specified.
E0512 01:27:00.862110    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-20220512010246-7184\client.crt: The system cannot find the path specified.
E0512 01:27:01.191930    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-20220512010246-7184\client.crt: The system cannot find the path specified.
E0512 01:27:01.834433    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-20220512010246-7184\client.crt: The system cannot find the path specified.
E0512 01:27:03.117000    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-20220512010246-7184\client.crt: The system cannot find the path specified.
net_test.go:238: (dbg) Non-zero exit: kubectl --context false-20220512010244-7184 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.5109884s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.52s)

TestNetworkPlugins/group/kindnet/Start (164.94s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-20220512010244-7184 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker
E0512 01:27:29.336574    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.crt: The system cannot find the path specified.
E0512 01:27:29.346813    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.crt: The system cannot find the path specified.
E0512 01:27:29.362318    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.crt: The system cannot find the path specified.
E0512 01:27:29.392927    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.crt: The system cannot find the path specified.
E0512 01:27:29.438177    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.crt: The system cannot find the path specified.
E0512 01:27:29.533414    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.crt: The system cannot find the path specified.
E0512 01:27:29.702409    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.crt: The system cannot find the path specified.
E0512 01:27:30.033115    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.crt: The system cannot find the path specified.
E0512 01:27:30.681127    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.crt: The system cannot find the path specified.
E0512 01:27:31.976300    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.crt: The system cannot find the path specified.
E0512 01:27:34.548724    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.crt: The system cannot find the path specified.
E0512 01:27:39.680142    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.crt: The system cannot find the path specified.
E0512 01:27:41.546055    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-20220512010246-7184\client.crt: The system cannot find the path specified.
E0512 01:27:49.928761    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.crt: The system cannot find the path specified.
E0512 01:28:10.424436    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.crt: The system cannot find the path specified.
E0512 01:28:22.522855    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-20220512010246-7184\client.crt: The system cannot find the path specified.
E0512 01:28:51.401506    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.crt: The system cannot find the path specified.
E0512 01:29:03.360881    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-different-port-20220512011148-7184\client.crt: The system cannot find the path specified.
E0512 01:29:03.375908    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-different-port-20220512011148-7184\client.crt: The system cannot find the path specified.
E0512 01:29:03.391078    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-different-port-20220512011148-7184\client.crt: The system cannot find the path specified.
E0512 01:29:03.422496    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-different-port-20220512011148-7184\client.crt: The system cannot find the path specified.
E0512 01:29:03.469774    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-different-port-20220512011148-7184\client.crt: The system cannot find the path specified.
E0512 01:29:03.562988    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-different-port-20220512011148-7184\client.crt: The system cannot find the path specified.
E0512 01:29:03.736679    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-different-port-20220512011148-7184\client.crt: The system cannot find the path specified.
E0512 01:29:04.069792    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-different-port-20220512011148-7184\client.crt: The system cannot find the path specified.
E0512 01:29:04.714967    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-different-port-20220512011148-7184\client.crt: The system cannot find the path specified.
E0512 01:29:06.002109    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-different-port-20220512011148-7184\client.crt: The system cannot find the path specified.
E0512 01:29:08.604534    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0512 01:29:08.635897    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-different-port-20220512011148-7184\client.crt: The system cannot find the path specified.
E0512 01:29:13.768946    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-different-port-20220512011148-7184\client.crt: The system cannot find the path specified.
E0512 01:29:24.014788    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-different-port-20220512011148-7184\client.crt: The system cannot find the path specified.
E0512 01:29:44.454633    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-20220512010246-7184\client.crt: The system cannot find the path specified.
E0512 01:29:44.501068    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-different-port-20220512011148-7184\client.crt: The system cannot find the path specified.
E0512 01:29:52.992703    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-20220512010244-7184 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker: (2m44.9390927s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (164.94s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.06s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-cwskd" [f392dc24-28aa-42f4-87fb-5cc52c22a521] Running
E0512 01:30:13.337068    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.crt: The system cannot find the path specified.
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.0528445s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.06s)

TestNetworkPlugins/group/kindnet/KubeletFlags (7.2s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-20220512010244-7184 "pgrep -a kubelet"
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kindnet-20220512010244-7184 "pgrep -a kubelet": (7.203925s)
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (7.20s)

TestNetworkPlugins/group/kindnet/NetCatPod (21.83s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-20220512010244-7184 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-7zgls" [855c2724-0a41-470d-ab2a-d5587c012ab7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0512 01:30:25.477895    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-different-port-20220512011148-7184\client.crt: The system cannot find the path specified.
helpers_test.go:342: "netcat-668db85669-7zgls" [855c2724-0a41-470d-ab2a-d5587c012ab7] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 21.1065272s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (21.83s)

TestNetworkPlugins/group/kindnet/DNS (0.98s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220512010244-7184 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.98s)

TestNetworkPlugins/group/kindnet/Localhost (0.84s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-20220512010244-7184 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.84s)

TestNetworkPlugins/group/kindnet/HairPin (0.75s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-20220512010244-7184 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.75s)

TestNetworkPlugins/group/enable-default-cni/Start (388.4s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-20220512010229-7184 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker
E0512 01:31:25.033472    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
E0512 01:31:38.214470    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:31:38.228753    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:31:38.244881    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:31:38.276705    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:31:38.323628    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:31:38.416728    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:31:38.591073    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:31:38.922751    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:31:39.569158    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:31:40.854651    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:31:43.424031    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:31:47.411360    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-different-port-20220512011148-7184\client.crt: The system cannot find the path specified.
E0512 01:31:48.557612    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:31:58.801273    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:32:00.505419    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-20220512010246-7184\client.crt: The system cannot find the path specified.
E0512 01:32:19.287232    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:32:28.316439    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-20220512010246-7184\client.crt: The system cannot find the path specified.
E0512 01:32:29.357651    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.crt: The system cannot find the path specified.
E0512 01:32:43.094165    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-20220512010315-7184\client.crt: The system cannot find the path specified.
E0512 01:32:57.189682    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.crt: The system cannot find the path specified.
E0512 01:33:00.263894    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:34:03.378115    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-different-port-20220512011148-7184\client.crt: The system cannot find the path specified.
E0512 01:34:08.624102    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0512 01:34:22.200405    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:34:31.271796    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\default-k8s-different-port-20220512011148-7184\client.crt: The system cannot find the path specified.
E0512 01:34:53.007196    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.
E0512 01:35:11.762561    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:35:11.777083    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:35:11.792982    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:35:11.825304    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:35:11.871304    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:35:11.966077    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:35:12.138413    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:35:12.467754    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:35:13.122176    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:35:14.416418    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:35:16.981400    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:35:22.112478    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:35:32.355364    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:35:52.838673    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:36:08.258951    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
E0512 01:36:19.914900    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-20220512010315-7184\client.crt: The system cannot find the path specified.
E0512 01:36:25.041865    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-20220511231058-7184\client.crt: The system cannot find the path specified.
E0512 01:36:33.803616    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:36:38.240760    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:37:00.515066    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-20220512010246-7184\client.crt: The system cannot find the path specified.
E0512 01:37:06.051414    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\false-20220512010244-7184\client.crt: The system cannot find the path specified.
E0512 01:37:29.370480    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\auto-20220512010229-7184\client.crt: The system cannot find the path specified.
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-20220512010229-7184 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker: (6m28.4004391s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (388.40s)
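
The flags above are the whole story for this plugin: --enable-default-cni=true asks minikube to generate a basic bridge CNI config on the node instead of deploying a CNI add-on. A minimal local sketch with an illustrative profile name; the config path in the second command is an assumption about where current minikube writes the generated config, not something this log shows:

    # illustrative profile name, not the CI-generated one
    minikube start -p enable-default-cni-demo --memory=2048 --alsologtostderr \
      --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker
    # assumed location of the generated bridge config inside the node
    minikube ssh -p enable-default-cni-demo "sudo cat /etc/cni/net.d/1-k8s.conflist"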

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (7.01s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-20220512010229-7184 "pgrep -a kubelet"
E0512 01:37:55.742871    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kindnet-20220512010244-7184\client.crt: The system cannot find the path specified.
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p enable-default-cni-20220512010229-7184 "pgrep -a kubelet": (7.0102718s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (7.01s)
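
KubeletFlags is just a process grep over SSH: pgrep -a prints each matching PID with its full command line, which lets the test assert on the kubelet's network-related flags for this CNI mode. The equivalent manual check, with an illustrative profile name:

    minikube ssh -p enable-default-cni-demo "pgrep -a kubelet"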

TestNetworkPlugins/group/enable-default-cni/NetCatPod (21.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220512010229-7184 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-rtr6c" [cc211211-4177-4cc9-823a-666b20a41110] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-rtr6c" [cc211211-4177-4cc9-823a-666b20a41110] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 20.0192221s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (21.28s)
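
The NetCatPod step force-replaces a small netcat deployment and then polls until its pod reports Ready. A rough kubectl-only approximation of the 15m0s poll above, with an illustrative context name:

    kubectl --context enable-default-cni-demo replace --force -f testdata\netcat-deployment.yaml
    # approximates the helpers_test.go readiness poll
    kubectl --context enable-default-cni-demo wait pod -l app=netcat \
      --for=condition=Ready --timeout=15m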

TestNetworkPlugins/group/bridge/Start (134.12s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-20220512010229-7184 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker
E0512 01:39:36.605350    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\ingress-addon-legacy-20220511235145-7184\client.crt: The system cannot find the path specified.

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-20220512010229-7184 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker: (2m14.1217562s)
--- PASS: TestNetworkPlugins/group/bridge/Start (134.12s)

TestNetworkPlugins/group/bridge/KubeletFlags (6.56s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-20220512010229-7184 "pgrep -a kubelet"

=== CONT  TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Done: out/minikube-windows-amd64.exe ssh -p bridge-20220512010229-7184 "pgrep -a kubelet": (6.5569267s)
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (6.56s)

TestNetworkPlugins/group/bridge/NetCatPod (20.99s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220512010229-7184 replace --force -f testdata\netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-zw9lx" [85448d1f-b043-4292-a653-16d0f343c7b7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0512 01:42:00.530607    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-20220512010246-7184\client.crt: The system cannot find the path specified.

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:342: "netcat-668db85669-zw9lx" [85448d1f-b043-4292-a653-16d0f343c7b7] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 20.037838s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (20.99s)

TestNetworkPlugins/group/bridge/DNS (0.6s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220512010229-7184 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.60s)
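
The DNS check resolves the API server's service name through cluster DNS from inside the netcat pod; if the bridge CNI gives the pod a working route to the cluster DNS service (CoreDNS on this Kubernetes version), the lookup succeeds. Illustrative context name:

    kubectl --context bridge-demo exec deployment/netcat -- nslookup kubernetes.default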

TestNetworkPlugins/group/bridge/Localhost (0.49s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-20220512010229-7184 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.49s)

TestNetworkPlugins/group/bridge/HairPin (0.63s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-20220512010229-7184 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.63s)
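
Localhost and HairPin differ only in the netcat target: the first dials the pod's own loopback, while the second dials the pod back through its own service name, which only works if hairpin NAT is functioning on the bridge. Both probes side by side, with an illustrative context name:

    # loopback reachability inside the pod
    kubectl --context bridge-demo exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin: the pod reaches itself via the netcat service
    kubectl --context bridge-demo exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"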


Test skip (25/268)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.23.5/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.5/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.5/cached-images (0.00s)

TestDownloadOnly/v1.23.5/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.5/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.5/binaries (0.00s)

TestDownloadOnly/v1.23.6-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.6-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.23.6-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.6-rc.0/binaries (0.00s)

TestAddons/parallel/Registry (25.54s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:280: registry stabilized in 36.9031ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:282: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-fzj69" [39a6817e-f3f5-4907-9227-0b219d1e66df] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:282: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0587376s

=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-proxy-bnfrx" [02de54a2-839a-4cd3-b4af-d7c59e03ac9d] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.1008398s
addons_test.go:290: (dbg) Run:  kubectl --context addons-20220511225738-7184 delete po -l run=registry-test --now
addons_test.go:295: (dbg) Run:  kubectl --context addons-20220511225738-7184 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:295: (dbg) Done: kubectl --context addons-20220511225738-7184 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (14.9871935s)
addons_test.go:305: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (25.54s)
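
Everything before the skip is reproducible by hand: wait for the registry and registry-proxy pods in kube-system, then probe the in-cluster service DNS name from a throwaway busybox pod, exactly as the test does (context name illustrative):

    kubectl --context addons-demo run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"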

TestAddons/parallel/Ingress (48.62s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:162: (dbg) Run:  kubectl --context addons-20220511225738-7184 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:182: (dbg) Run:  kubectl --context addons-20220511225738-7184 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:182: (dbg) Done: kubectl --context addons-20220511225738-7184 replace --force -f testdata\nginx-ingress-v1.yaml: (4.6231465s)
addons_test.go:195: (dbg) Run:  kubectl --context addons-20220511225738-7184 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:195: (dbg) Done: kubectl --context addons-20220511225738-7184 replace --force -f testdata\nginx-pod-svc.yaml: (1.7983649s)
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [c7ddab9f-dbb7-4073-9d87-3893301bbec0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [c7ddab9f-dbb7-4073-9d87-3893301bbec0] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 35.2602395s
addons_test.go:212: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-20220511225738-7184 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

=== CONT  TestAddons/parallel/Ingress
addons_test.go:212: (dbg) Done: out/minikube-windows-amd64.exe -p addons-20220511225738-7184 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (6.4642694s)
addons_test.go:232: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (48.62s)
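
The portion that does run validates routing end to end: curl hits port 80 from inside the node with a Host header, and the ingress-nginx controller uses that header to route the request to the nginx test service. Manual equivalent, with an illustrative profile name:

    minikube -p addons-demo ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"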

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:448: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20220511231058-7184 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:911: output didn't produce a URL
functional_test.go:905: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-20220511231058-7184 --alsologtostderr -v=1] ...
helpers_test.go:488: unable to find parent, assuming dead: process does not exist
E0511 23:25:31.423375    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0511 23:29:08.238871    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0511 23:34:08.254735    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0511 23:39:08.273435    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0511 23:42:11.488528    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0511 23:44:08.291192    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
E0511 23:49:08.304735    7184 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-20220511225738-7184\client.crt: The system cannot find the path specified.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)
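
The skip fires when the daemonized dashboard process never prints a URL on stdout within the timeout, which is what "output didn't produce a URL" records above. The invocation itself is ordinary and can be retried interactively (profile name illustrative):

    minikube dashboard --url --port 36195 -p functional-demo --alsologtostderr -v=1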

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/ServiceCmdConnect (60.49s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1561: (dbg) Run:  kubectl --context functional-20220511231058-7184 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1567: (dbg) Run:  kubectl --context functional-20220511231058-7184 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1567: (dbg) Done: kubectl --context functional-20220511231058-7184 expose deployment hello-node-connect --type=NodePort --port=8080: (1.4907171s)
functional_test.go:1572: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-74cf8bc446-45d4d" [d6e35845-c328-4eda-87b4-8ae2f5d132bf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-74cf8bc446-45d4d" [d6e35845-c328-4eda-87b4-8ae2f5d132bf] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1572: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 58.4952879s
functional_test.go:1578: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (60.49s)
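
The setup here is a plain NodePort exposure; only the final connectivity assertion is skipped on port-forwarded drivers. The same deployment can be created by hand (context name illustrative):

    kubectl --context functional-demo create deployment hello-node-connect \
      --image=k8s.gcr.io/echoserver:1.8
    kubectl --context functional-demo expose deployment hello-node-connect \
      --type=NodePort --port=8080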

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:193: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (45.28s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:162: (dbg) Run:  kubectl --context ingress-addon-legacy-20220511235145-7184 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:162: (dbg) Done: kubectl --context ingress-addon-legacy-20220511235145-7184 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (4.7547442s)
addons_test.go:182: (dbg) Run:  kubectl --context ingress-addon-legacy-20220511235145-7184 replace --force -f testdata\nginx-ingress-v1beta1.yaml
addons_test.go:182: (dbg) Done: kubectl --context ingress-addon-legacy-20220511235145-7184 replace --force -f testdata\nginx-ingress-v1beta1.yaml: (1.4390841s)
addons_test.go:195: (dbg) Run:  kubectl --context ingress-addon-legacy-20220511235145-7184 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:195: (dbg) Done: kubectl --context ingress-addon-legacy-20220511235145-7184 replace --force -f testdata\nginx-pod-svc.yaml: (1.3787291s)
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [c9f86189-2903-4481-9108-d3228631441a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [c9f86189-2903-4481-9108-d3228631441a] Running
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 31.1872428s
addons_test.go:212: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220511235145-7184 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:212: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-20220511235145-7184 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (6.3219226s)
addons_test.go:232: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestIngressAddonLegacy/serial/ValidateIngressAddons (45.28s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestStartStop/group/disable-driver-mounts (14.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:105: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220512011134-7184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20220512011134-7184
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-20220512011134-7184: (14.1584943s)
--- SKIP: TestStartStop/group/disable-driver-mounts (14.16s)

TestNetworkPlugins/group/flannel (15.44s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220512010229-7184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p flannel-20220512010229-7184

=== CONT  TestNetworkPlugins/group/flannel
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p flannel-20220512010229-7184: (15.4388952s)
--- SKIP: TestNetworkPlugins/group/flannel (15.44s)
